Speaker: Prof. Luke Ong, NTU Singapore
Time: 3:00 p.m. (15:00), Apr 11, 2025, GMT+8
Venue: Room 1131, Science Building #1 (Yanyuan)
Abstract:
Linear temporal logic (LTL) and, more generally, ω-regular objectives are alternatives to the traditional discounted-sum and average-reward objectives in reinforcement learning (RL), offering the advantage of greater comprehensibility and hence explainability. In this talk, I will discuss the relationship between these objectives. Our main result is that every RL problem with an ω-regular objective can be reduced, in an optimality-preserving fashion, to a limit-average reward problem via (finite-memory) reward machines. Furthermore, we demonstrate the efficacy of this approach by showing that optimal policies for limit-average problems can be found asymptotically by approximately solving a sequence of discounted-sum problems. Consequently, we resolve an open problem: optimal policies for reinforcement learning with LTL and ω-regular objectives can be learned asymptotically. I will relate these results to safe reinforcement learning and end with some general remarks about AI Safety.
Source: School of Computer Science, PKU
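
As background for the abstract's central notion, here is a minimal Python sketch of a reward machine: a finite-state transducer that reads labels emitted by the environment and outputs scalar rewards, which is how an ω-regular objective can be recast as a limit-average reward objective. The labels, states, and rewards below are hypothetical illustrations, not the construction presented in the talk.

```python
# Minimal reward-machine sketch (illustrative only). A reward machine has
# states U, a transition function delta over environment labels, and a
# reward function rho; maximising the limit-average of the emitted rewards
# stands in for satisfying the original omega-regular objective.

from dataclasses import dataclass, field


@dataclass
class RewardMachine:
    """Finite-memory reward machine."""
    initial: str
    delta: dict   # (state, label) -> next state
    rho: dict     # (state, label) -> emitted reward
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.initial

    def step(self, label: str) -> float:
        """Advance on one environment label; return the emitted reward."""
        r = self.rho[(self.state, label)]
        self.state = self.delta[(self.state, label)]
        return r


# Hypothetical example: emit reward 1 each time label "goal" immediately
# follows label "safe"; a high limit-average reward then corresponds to
# seeing this pattern infinitely often.
rm = RewardMachine(
    initial="u0",
    delta={("u0", "safe"): "u1", ("u0", "goal"): "u0",
           ("u1", "safe"): "u1", ("u1", "goal"): "u0"},
    rho={("u0", "safe"): 0.0, ("u0", "goal"): 0.0,
         ("u1", "safe"): 0.0, ("u1", "goal"): 1.0},
)

if __name__ == "__main__":
    trace = ["safe", "goal", "goal", "safe", "goal"]
    print([rm.step(label) for label in trace])  # [0.0, 1.0, 0.0, 0.0, 1.0]
```

In an RL setting, such a machine would be run in lockstep with the environment (a product construction), so that a standard limit-average learner sees the machine's rewards while the machine tracks the progress of the ω-regular objective.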