Ergodicity in reinforcement learning

πŸ“… 2026-03-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses a fundamental limitation of conventional reinforcement learning, which optimizes expected cumulative reward but fails to accurately capture the long-term performance of individual infinite-horizon trajectories in non-ergodic reward environments. Through theoretical examples, the paper systematically analyzes the impact of non-ergodic reward processes and demonstrates the inadequacy of standard objectives in reflecting real-world agent behavior under deployment. Leveraging ergodic Markov chain theory, the study reframes the evaluation of agent performance around individual trajectory outcomes rather than ensemble averages. Building on this insight, the authors synthesize and unify existing approaches into a coherent optimization framework explicitly designed to guarantee robust long-term performance along single trajectories. This framework provides both theoretical grounding and practical guidance for designing reinforcement learning objectives tailored to non-ergodic settings.

πŸ“ Abstract
In reinforcement learning, we typically aim to optimize the expected value of the sum of rewards an agent collects over a trajectory. However, if the process generating these rewards is non-ergodic, the expected value, i.e., the average over infinitely many trajectories with a given policy, is uninformative for the average over a single, but infinitely long trajectory. Thus, if we care about how the individual agent performs during deployment, the expected value is not a good optimization objective. In this paper, we discuss the impact of non-ergodic reward processes on reinforcement learning agents through an instructive example, relate the notion of ergodic reward processes to more widely used notions of ergodic Markov chains, and present existing solutions that optimize long-term performance of individual trajectories under non-ergodic reward dynamics.
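The gap the abstract describes between the ensemble average and the time average of a single trajectory can be illustrated with a minimal simulation. The sketch below uses a classic multiplicative reward process (not taken from the paper itself; the growth factors 1.5 and 0.6 are illustrative assumptions): the expected per-step growth is 0.5 · 1.5 + 0.5 · 0.6 = 1.05 > 1, so the expected value grows, yet the typical per-step growth of an individual trajectory is √(1.5 · 0.6) ≈ 0.949 < 1, so almost every single trajectory decays.

```python
import random

random.seed(0)

def simulate(steps, up=1.5, down=0.6):
    """One trajectory of a multiplicative process:
    wealth is multiplied by `up` or `down` with equal probability."""
    w = 1.0
    for _ in range(steps):
        w *= up if random.random() < 0.5 else down
    return w

steps = 1000
n_traj = 10_000
finals = [simulate(steps) for _ in range(n_traj)]

# Ensemble view: E[growth per step] = 0.5*1.5 + 0.5*0.6 = 1.05 > 1.
# Time view: typical growth per step = sqrt(1.5*0.6) ~ 0.949 < 1.
frac_ruined = sum(f < 1.0 for f in finals) / n_traj
print(f"fraction of trajectories below their starting value: {frac_ruined:.3f}")
```

Because the process is non-ergodic in this sense, nearly all simulated trajectories end below their starting value even though the expected value after 1000 steps is astronomically large; optimizing the expectation would therefore mislead an agent that only ever lives through one trajectory.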
Problem

Research questions and friction points this paper is trying to address.

ergodicity
reinforcement learning
non-ergodic reward processes
expected value
long-term performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-ergodicity
reinforcement learning
ergodicity
trajectory optimization
long-term performance
Dominik Baumann
Aalto University, Espoo, Finland
Control Theory | Robotics | Machine Learning | Multi-agent Systems
Erfaun Noorani
MIT Lincoln Laboratory, University of Maryland College Park
Control Theory | Reinforcement Learning | Decision Theory
Arsenii Mustafin
PhD student, Boston University
Reinforcement Learning | Explainable AI
Xinyi Sheng
Cyber-physical Systems Group, Aalto University, Espoo 02150, Finland
Bert Verbruggen
Data Analytics Lab, Vrije Universiteit Brussel, Brussel 1050, Belgium
Arne Vanhoyweghen
Data Analytics Lab, Vrije Universiteit Brussel, Brussel 1050, Belgium
Vincent Ginis
Vrije Universiteit Brussel / Harvard University
Physics | Machine Learning
Thomas B. SchΓΆn
Department of Information Technology, Uppsala University, 75105 Uppsala, Sweden