ES-C51: Expected Sarsa Based C51 Distributional Reinforcement Learning Algorithm

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the instability of C51 in distributional reinforcement learning when action expected returns are similar, this paper proposes Expected Sarsa–C51 (ES-C51). ES-C51 integrates Expected Sarsa into the C51 framework by replacing the greedy Bellman update with a softmax-weighted aggregation over action-value distributions, thereby incorporating information from all actions, mitigating policy oscillation, and enhancing the robustness of distributional estimates. It preserves C51’s capability to model value distributions discretely while improving training stability through smoothed policy evaluation. Empirical evaluation on Gym classical control tasks and the Atari-10 benchmark demonstrates that ES-C51 significantly outperforms Q-learning–based C51 variants (QL-C51), validating its effectiveness in achieving both improved stability and superior performance in distributional RL.

📝 Abstract
In most value-based reinforcement learning (RL) algorithms, the agent estimates only the expected reward for each action and selects the action with the highest reward. In contrast, Distributional Reinforcement Learning (DRL) estimates the entire probability distribution of possible rewards, providing richer information about uncertainty and variability. C51 is a popular DRL algorithm for discrete action spaces. It uses a Q-learning approach, where the distribution is learned using a greedy Bellman update. However, this can cause problems if multiple actions at a state have similar expected rewards but different distributions, as the algorithm may not learn a stable distribution. This study presents a modified version of C51 (ES-C51) that replaces the greedy Q-learning update with an Expected Sarsa update, which uses a softmax calculation to combine information from all possible actions at a state rather than relying on a single best action. This reduces instability when actions have similar expected rewards and allows the agent to learn higher-performing policies. This approach is evaluated on classic control environments from Gym and the Atari-10 benchmark. For a fair comparison, we modify the standard C51's exploration strategy from ε-greedy to softmax, which we refer to as QL-C51 (Q-Learning based C51). The results demonstrate that ES-C51 outperforms QL-C51 across many environments.
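The core change the abstract describes, replacing the greedy Bellman target with a softmax-weighted (Expected Sarsa) mixture of per-action return distributions, can be sketched as follows. This is a minimal illustration under assumed conventions (a fixed categorical support as in C51, a temperature parameter `tau`, and hypothetical support bounds `v_min`/`v_max`), not the authors' implementation.

```python
import numpy as np

def softmax(x, tau=1.0):
    # Numerically stable softmax over action values.
    z = (x - x.max()) / tau
    e = np.exp(z)
    return e / e.sum()

def es_c51_target(next_dists, atoms, reward, gamma,
                  tau=1.0, v_min=-10.0, v_max=10.0):
    """Sketch of an Expected Sarsa target distribution for C51.

    next_dists: (num_actions, num_atoms) categorical probs at next state.
    atoms:      (num_atoms,) fixed support z_1..z_N.
    Returns a (num_atoms,) categorical distribution on the same support.
    """
    num_atoms = atoms.shape[0]
    # Q(s', a) = sum_i p_i(s', a) * z_i for each action.
    q_values = next_dists @ atoms
    # Expected Sarsa: softmax policy weights instead of argmax.
    pi = softmax(q_values, tau)
    # Mixture over all actions' distributions, weighted by the policy.
    mixed = pi @ next_dists  # shape (num_atoms,)
    # Standard C51 projection of r + gamma * z onto the fixed support.
    tz = np.clip(reward + gamma * atoms, v_min, v_max)
    delta_z = (v_max - v_min) / (num_atoms - 1)
    b = (tz - v_min) / delta_z
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    target = np.zeros(num_atoms)
    for j in range(num_atoms):
        # Split each atom's mass between its two neighboring bins.
        if lower[j] == upper[j]:
            target[lower[j]] += mixed[j]
        else:
            target[lower[j]] += mixed[j] * (upper[j] - b[j])
            target[upper[j]] += mixed[j] * (b[j] - lower[j])
    return target
```

A greedy (QL-C51 style) target would instead pick the single distribution `next_dists[q_values.argmax()]`; the softmax mixture is what smooths the update when several actions have nearly equal expected returns.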
Problem

Research questions and friction points this paper is trying to address.

C51's greedy Bellman update is unstable when actions have similar expected returns but different distributions
Greedy action selection discards information from all non-maximal actions
Policy oscillation degrades performance in control and Atari environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces greedy Q-learning with Expected Sarsa update
Uses softmax to combine information from all actions
Reduces instability when actions have similar rewards
Rijul Tandon
UIET, Panjab University, Chandigarh, India
Peter Vamplew
Professor, Information Technology, Federation University Australia
reinforcement learning, multi-objective reinforcement learning, responsible AI, AI safety
Cameron Foale
Federation University, Victoria, Australia