🤖 AI Summary
Conventional reinforcement learning optimizes the expected cumulative reward, neglecting the long-term stability of individual trajectories. This oversight can lead to policies that perform well in expectation but exhibit poor asymptotic behavior on specific trajectories.
Method: We argue that the long-term performance of a single trajectory should be characterized by its time-average growth rate, and we introduce the first Bellman operator tailored to this objective. To estimate this growth rate robustly, we design a corrected geometric-mean estimator computed over an $N$-step sliding window, which is incorporated as a regularizer into the policy-optimization objective.
Contribution/Results: Our approach explicitly models and optimizes the asymptotic growth rate of individual trajectories while preserving the standard RL framework. Experiments in highly uncertain simulation environments demonstrate that our algorithm significantly improves long-term policy robustness and single-trajectory performance, outperforming established baselines.
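The claimed link between the geometric mean and the time-average growth rate can be made concrete with a standard derivation under multiplicative dynamics (the symbols $W_t$ and $r_t$ below are illustrative, not necessarily the paper's notation):

```latex
% A quantity W_t that grows multiplicatively by per-step factors r_t satisfies
W_T = W_0 \prod_{t=1}^{T} r_t ,
% so the time-average growth rate of a single trajectory is
g \;=\; \lim_{T \to \infty} \frac{1}{T} \log \frac{W_T}{W_0}
  \;=\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \log r_t
  \;=\; \lim_{T \to \infty} \log \Bigl( \prod_{t=1}^{T} r_t \Bigr)^{1/T} .
```

That is, $g$ is the logarithm of the asymptotic geometric mean of the per-step growth factors, which is why the geometric mean is the natural estimator of the time-average growth rate in this setting.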
📝 Abstract
Reinforcement learning (RL) algorithms typically optimize the expected cumulative reward, i.e., the expected value of the sum of scalar rewards an agent receives over the course of a trajectory. The expected value averages the performance over an infinite number of trajectories. However, when deploying the agent in the real world, this ensemble average may be uninformative about the performance of individual trajectories. Thus, in many applications, optimizing the long-term performance of individual trajectories may be more desirable. In this work, we propose a novel RL algorithm that combines the standard ensemble average with the time-average growth rate, a measure of the long-term performance of individual trajectories. We first define the Bellman operator for the time-average growth rate. We then show that, under multiplicative reward dynamics, the geometric mean aligns with the time-average growth rate. To address more general and unknown reward dynamics, we propose a modified geometric mean over an $N$-step sliding window that captures path dependency, and use it as an estimator of the time-average growth rate. This estimator is embedded as a regularizer into the objective, yielding a practical algorithm that enables the policy to benefit from the ensemble average and the time average simultaneously. We evaluate our algorithm in challenging simulations, where it outperforms conventional RL methods.
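The estimator and regularized objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact correction term, the window size, and the regularization weight `beta` are assumptions, and the reward shift used to keep the geometric mean well-defined is a hypothetical guard.

```python
import numpy as np

def sliding_geometric_mean(rewards, window=8, shift=1e-6):
    """Geometric mean of the last `window` per-step values (sketch).

    The paper's corrected estimator is not fully specified in the abstract;
    here we simply clip values to stay positive (hypothetical guard) and
    average in log-space, which is numerically stabler than multiplying.
    """
    r = np.asarray(rewards, dtype=float)[-window:]
    r = np.maximum(r, shift)  # geometric mean requires positive values
    return float(np.exp(np.log(r).mean()))

def regularized_return(rewards, gamma=0.99, beta=0.1, window=8):
    """Ensemble-average (discounted) return plus a time-average regularizer.

    `beta` trades off the standard objective against the sliding-window
    geometric mean; both `beta` and `window` are illustrative choices.
    """
    discounts = gamma ** np.arange(len(rewards))
    discounted = float(np.dot(discounts, rewards))
    return discounted + beta * sliding_geometric_mean(rewards, window)
```

In a practical algorithm this regularizer would be evaluated on recent on-policy rewards and added to the policy-optimization loss, so the policy is pushed toward trajectories whose per-step growth is stable, not just high in expectation.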