Beyond expected value: geometric mean optimization for long-term policy performance in reinforcement learning

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional reinforcement learning optimizes the expected cumulative reward, neglecting the long-term stability of individual trajectories. This oversight can lead to policies that perform well in expectation but exhibit poor asymptotic behavior on specific trajectories. Method: We argue that the long-term performance of a single trajectory should be characterized by its time-average growth rate, and we introduce the first Bellman operator tailored to this objective. To estimate this growth rate robustly, we design a corrected geometric mean estimator over an N-sliding window, which is incorporated as a regularizer into the policy optimization objective. Contribution/Results: Our approach explicitly models and optimizes the asymptotic growth rate of individual trajectories while preserving the standard RL framework. Experiments in highly uncertain simulation environments demonstrate that our algorithm significantly improves long-term policy robustness and single-trajectory performance, outperforming established baselines.

📝 Abstract
Reinforcement learning (RL) algorithms typically optimize the expected cumulative reward, i.e., the expected value of the sum of scalar rewards an agent receives over the course of a trajectory. The expected value averages the performance over an infinite number of trajectories. However, when deploying the agent in the real world, this ensemble average may be uninformative about the performance of individual trajectories. Thus, in many applications, optimizing the long-term performance of individual trajectories might be more desirable. In this work, we propose a novel RL algorithm that combines the standard ensemble average with the time-average growth rate, a measure of the long-term performance of individual trajectories. We first define the Bellman operator for the time-average growth rate. We then show that, under multiplicative reward dynamics, the geometric mean aligns with the time-average growth rate. To address more general and unknown reward dynamics, we propose a modified geometric mean with an $N$-sliding window that captures path dependence, serving as an estimator for the time-average growth rate. This estimator is embedded as a regularizer into the objective, forming a practical algorithm and enabling the policy to benefit from the ensemble average and the time average simultaneously. We evaluate our algorithm in challenging simulations, where it outperforms conventional RL methods.
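The gap between the ensemble average and the time average that motivates the paper can be illustrated with a classic multiplicative gamble. The numbers below (factors 1.5 and 0.6) are illustrative choices, not taken from the paper:

```python
import math

# Multiplicative reward dynamics: each step multiplies the running
# return by 1.5 or 0.6 with equal probability (illustrative values).
up, down = 1.5, 0.6

# Ensemble average growth per step: arithmetic mean over outcomes.
ensemble = (up + down) / 2        # 1.05 > 1, so expectation grows

# Time-average growth per step: geometric mean over outcomes.
time_avg = math.sqrt(up * down)   # sqrt(0.9) ≈ 0.949 < 1, so a
                                  # typical single trajectory decays
```

Despite a positive expected growth (5% per step), the geometric mean is below 1, so almost every individual trajectory shrinks over time; this is exactly the regime where optimizing only the expected cumulative reward is misleading.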
Problem

Research questions and friction points this paper is trying to address.

Optimizing long-term individual trajectory performance in RL
Addressing limitations of expected cumulative reward optimization
Combining ensemble average with time-average growth rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric mean optimization for long-term performance
Bellman operator for time-average growth rate
Modified geometric mean with sliding window regularizer
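A minimal sketch of the sliding-window geometric mean idea from the Innovation list, written under stated assumptions: rewards are assumed to already be positive per-step growth factors, and the window size, clipping floor `eps`, and function name are hypothetical choices for illustration, not the paper's exact construction:

```python
import math

def windowed_growth_rate(factors, window=3, eps=1e-8):
    """Estimate the per-step log growth rate of a single trajectory
    using a sliding-window geometric mean of growth factors.

    Computing the geometric mean in log space avoids overflow and
    underflow on long products; `eps` guards against log(0).
    """
    rates = []
    for t in range(len(factors)):
        chunk = factors[max(0, t - window + 1) : t + 1]
        log_gm = sum(math.log(max(f, eps)) for f in chunk) / len(chunk)
        rates.append(log_gm)  # log of the windowed geometric mean
    return rates

# Growth factors of one hypothetical trajectory.
factors = [1.1, 0.9, 1.05, 1.2, 0.8]
rates = windowed_growth_rate(factors, window=3)
```

A quantity like `rates[-1]` could then be added to the standard expected-return objective as a regularization term weighted by some coefficient, which is how the abstract describes combining the ensemble average with the time average.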
Xinyi Sheng
Cyber-physical Systems Group, Aalto University, Espoo, Finland
Dominik Baumann
Aalto University, Espoo, Finland
Control Theory · Robotics · Machine Learning · Multi-agent Systems