🤖 AI Summary
This work addresses a fundamental limitation of conventional reinforcement learning, which optimizes the expected cumulative reward but fails to capture the long-term performance of individual infinite-horizon trajectories when the reward process is non-ergodic. Through an instructive theoretical example, the paper analyzes the impact of non-ergodic reward processes and shows that the standard objective does not reflect how an individual agent performs during deployment. Relating the notion of ergodic reward processes to the more widely used notion of ergodic Markov chains, the study reframes the evaluation of agent performance around individual trajectory outcomes rather than ensemble averages. Building on this insight, the authors survey existing approaches that explicitly optimize long-term performance along single trajectories, providing both theoretical grounding and practical guidance for designing reinforcement learning objectives in non-ergodic settings.
📄 Abstract
In reinforcement learning, we typically aim to optimize the expected value of the sum of rewards an agent collects over a trajectory. However, if the process generating these rewards is non-ergodic, the expected value, i.e., the average over infinitely many trajectories under a given policy, is uninformative about the average over a single but infinitely long trajectory. Thus, if we care about how the individual agent performs during deployment, the expected value is not a good optimization objective. In this paper, we discuss the impact of non-ergodic reward processes on reinforcement learning agents through an instructive example, relate the notion of ergodic reward processes to the more widely used notion of ergodic Markov chains, and present existing solutions that optimize the long-term performance of individual trajectories under non-ergodic reward dynamics.
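The gap between ensemble and time averages can be made concrete with the classic multiplicative-gamble illustration of broken ergodicity (this is a generic textbook example, not necessarily the specific example used in the paper; the factors 1.5 and 0.6 are illustrative assumptions): the expected per-step growth factor exceeds 1, yet the expected log-factor is negative, so almost every individual trajectory decays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative multiplicative process (assumed numbers, not from the paper):
# each step multiplies "wealth" by 1.5 or 0.6 with equal probability.
up, down, p = 1.5, 0.6, 0.5

# Ensemble (expected-value) growth factor per step:
# 0.5 * 1.5 + 0.5 * 0.6 = 1.05 > 1, so the average over trajectories grows.
ensemble_factor = p * up + (1 - p) * down

# Time-average growth along a single trajectory is governed by the expected
# log-factor: 0.5 * log(1.5) + 0.5 * log(0.6) ≈ -0.053 < 0, so almost every
# individual trajectory shrinks in the long run.
time_avg_log_factor = p * np.log(up) + (1 - p) * np.log(down)

# Simulate many finite trajectories to see the gap in practice.
n_traj, n_steps = 10_000, 100
factors = rng.choice([up, down], size=(n_traj, n_steps))
wealth = factors.prod(axis=1)  # terminal wealth of each trajectory (start = 1)

print(f"expected per-step factor:     {ensemble_factor:.3f}")      # > 1
print(f"expected per-step log-factor: {time_avg_log_factor:.3f}")  # < 0
print(f"median terminal wealth:       {np.median(wealth):.2e}")    # far below 1
```

With these numbers the typical (median) trajectory ends far below its starting value even though the ensemble mean grows every step, which is exactly why optimizing the expected value can be misleading for a single deployed agent.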