ELO-Rated Sequence Rewards: Advancing Reinforcement Learning Models

📅 2024-05-17
🏛️ 2024 IEEE 13th Data Driven Control and Learning Systems Conference (DDCLS)
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Long-Term Reinforcement Learning (LTRL) suffers from scalar reward signals that are sparse and difficult to design. Method: This paper proposes a novel ordinal-preference-based reward modeling paradigm: stepwise rewards are replaced with expert-provided pairwise trajectory preferences, trajectory quality is quantified via an ELO rating mechanism, and an anchor-free, trajectory-level dynamic reward redistribution algorithm eliminates the reliance on frame-level expert annotations and fixed baseline rewards. The authors present it as the first work to integrate ordinal utility theory from economics and the ELO rating system into an RL framework, enabling end-to-end preference-driven training. Contribution/Results: On tasks of up to 5,000 steps, the method significantly outperforms baselines including PPO and SAC, and it learns high-performing policies from only a small number of expert preferences, demonstrating superior sample efficiency and generalization.
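The summary does not include code, so the following is a minimal Python sketch of how trajectory ratings could be maintained with the standard ELO update rule. The function names (`expected_score`, `elo_update`), the K-factor of 32, and the 1000-point starting rating are illustrative assumptions, not details from the paper.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard ELO expected score of trajectory A against trajectory B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))


def elo_update(r_a: float, r_b: float, a_wins: bool,
               k: float = 32.0) -> tuple[float, float]:
    """Update two trajectory ratings after one expert preference.

    `a_wins` means the expert preferred trajectory A over trajectory B.
    K = 32 is a conventional chess value, assumed here for illustration.
    """
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_wins else 0.0
    new_a = r_a + k * (s_a - e_a)
    new_b = r_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b


# Example: two trajectories starting at the common rating 1000.
r1, r2 = 1000.0, 1000.0
r1, r2 = elo_update(r1, r2, a_wins=True)  # expert preferred trajectory 1
```

Under such a scheme, every trajectory starts at a common rating and each expert comparison nudges the pair toward the observed preference; the final rating then serves as the trajectory's scalar return.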

📝 Abstract
Reinforcement Learning (RL) is highly dependent on the meticulous design of the reward function. However, accurately assigning rewards to each state-action pair in Long-Term RL (LTRL) tasks is formidable. Consequently, RL agents are predominantly trained with expert guidance. Drawing on the principles of ordinal utility theory from economics, we propose a novel reward estimation algorithm: ELO-Rating based RL (ERRL). This approach is distinguished by two main features. First, it leverages expert preferences over trajectories, instead of cardinal rewards (utilities), to compute the ELO rating of each trajectory and uses that rating as the trajectory's reward. Second, a new reward redistribution algorithm is introduced to mitigate training volatility in the absence of a fixed anchor reward. Our method demonstrates superior performance over several leading baselines in long-term scenarios (extending up to 5,000 steps) where conventional RL algorithms falter. Furthermore, we conduct a thorough analysis of how expert preferences affect the outcomes.
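The abstract names a reward redistribution step but does not spell it out. As a rough illustration only, the sketch below implements one plausible anchor-free scheme: standardize each trajectory's ELO rating against the ratings observed so far (a moving reference rather than a fixed baseline reward) and spread the result uniformly over the trajectory's steps. The function name, the running-statistics normalization, and the uniform split are all assumptions.

```python
import numpy as np


def redistribute(elo_rating: float, traj_len: int,
                 rating_history: list[float]) -> np.ndarray:
    """Turn a trajectory-level ELO rating into per-step rewards.

    Anchor-free: the rating is standardized against all ratings seen so
    far instead of against a fixed baseline, then split evenly across
    steps so the per-step rewards sum to the standardized score.
    """
    rating_history.append(elo_rating)
    mu = float(np.mean(rating_history))
    sigma = float(np.std(rating_history)) or 1.0  # avoid division by zero
    standardized = (elo_rating - mu) / sigma
    return np.full(traj_len, standardized / traj_len)
```

A uniform split is the simplest possible choice; the paper's "dynamic" redistribution presumably assigns credit non-uniformly, but that level of detail is not recoverable from this page.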
Problem

Research questions and friction points this paper is trying to address.

Designing accurate reward functions for long-term reinforcement learning tasks
Overcoming training instability without fixed anchor rewards
Improving performance in long-term scenarios of up to 5,000 steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

ELO-Rating based Reinforcement Learning (ERRL)
Expert preferences over trajectories in place of cardinal rewards
New reward redistribution algorithm for training stability (the pieces are combined in the sketch after this list)
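Reusing the `elo_update` and `redistribute` sketches above, a schematic version of the whole loop might look as follows. The trajectory generator and the simulated expert (which prefers the trajectory with the higher hidden ground-truth return) are stand-ins for illustration, not the paper's experimental setup.

```python
import random

# Stand-in trajectories: id -> list of hidden per-step ground-truth rewards.
trajectories = {i: [random.random() for _ in range(100)] for i in range(8)}
ratings = {i: 1000.0 for i in trajectories}   # common starting rating
history: list[float] = []                     # for anchor-free normalization


def simulated_expert_prefers(traj_a: list[float], traj_b: list[float]) -> bool:
    """Stand-in for a human expert: prefer the higher true return."""
    return sum(traj_a) >= sum(traj_b)


# Query the (simulated) expert on random trajectory pairs.
for _ in range(50):
    a, b = random.sample(sorted(trajectories), 2)
    a_wins = simulated_expert_prefers(trajectories[a], trajectories[b])
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_wins)

# Convert each trajectory's rating into per-step rewards for policy training
# (these would be fed to an RL algorithm such as PPO in place of env rewards).
step_rewards = {i: redistribute(ratings[i], len(trajectories[i]), history)
                for i in trajectories}
```

The point of the sketch is the data flow: pairwise preferences in, ELO ratings as trajectory-level utilities, redistributed per-step rewards out.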
Qi Ju, Falin Hei, Zhemei Fang, Yunfeng Luo
School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, P. R. China