🤖 AI Summary
Long-term reinforcement learning (LTRL) suffers from sparse, difficult-to-design scalar reward signals. Method: This paper proposes a novel ordinal-preference-based reward modeling paradigm: it replaces stepwise rewards with expert-provided pairwise trajectory preferences, quantifies trajectory quality via an ELO rating mechanism, and introduces an anchor-free, trajectory-level dynamic reward redistribution algorithm, eliminating reliance on frame-level expert annotations or fixed baseline rewards. It is the first work to integrate ordinal utility theory from economics and the ELO rating system into an RL framework, enabling end-to-end preference-driven training. Contribution/Results: On tasks of up to 5,000 steps, the method significantly outperforms baselines including PPO and SAC. It achieves high-performance policy learning from only a small number of expert preferences, demonstrating superior sample efficiency and generalization.
📝 Abstract
Reinforcement Learning (RL) depends heavily on meticulous reward function design, yet accurately assigning rewards to each state-action pair in Long-Term RL (LTRL) tasks is formidable. Consequently, RL agents are predominantly trained with expert guidance. Drawing on the principles of ordinal utility theory from economics, we propose a novel reward estimation algorithm: ELO-Rating based RL (ERRL). This approach has two main features. First, it leverages expert preferences over trajectories, instead of cardinal rewards (utilities), to compute the ELO rating of each trajectory as its reward. Second, a new reward redistribution algorithm is introduced to mitigate training volatility in the absence of a fixed anchor reward. Our method outperforms several leading baselines in long-term scenarios (up to 5000 steps), where conventional RL algorithms falter. Furthermore, we analyze in depth how expert preferences affect the outcomes.
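To make the ELO-rating step concrete, here is a minimal sketch of scoring trajectories from pairwise expert preferences via standard Elo updates. The function names, K-factor (32), and initial rating (1000) are illustrative assumptions, not values taken from the paper:

```python
# Sketch: derive scalar trajectory scores from pairwise expert preferences
# using the standard Elo update rule. Constants are illustrative assumptions.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Update both ratings after one pairwise comparison (zero-sum)."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_wins else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

def rate_trajectories(n_traj: int, preferences, init: float = 1000.0,
                      k: float = 32.0):
    """preferences: iterable of (winner_idx, loser_idx) expert judgments.
    Returns one Elo rating per trajectory, usable as a trajectory-level reward.
    """
    ratings = [init] * n_traj
    for winner, loser in preferences:
        ratings[winner], ratings[loser] = elo_update(
            ratings[winner], ratings[loser], a_wins=True, k=k)
    return ratings
```

For example, `rate_trajectories(2, [(0, 1)])` raises trajectory 0's rating and lowers trajectory 1's by the same amount, so each expert comparison only reorders trajectories relative to one another; this is the ordinal (rather than cardinal) character the abstract emphasizes, and why an anchor-free redistribution step is needed downstream.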