🤖 AI Summary
Monotonic value decomposition in centralized training with decentralized execution (CTDE) multi-agent reinforcement learning often underestimates optimal action-values, leading to suboptimal policies. Method: We propose an optimistic ε-greedy exploration mechanism that, for the first time, integrates an optimistic update network into the ε-greedy framework to dynamically identify potentially optimal actions during decentralized execution and adaptively increase their sampling probability, thereby correcting value estimation bias at the exploration level. Our approach combines monotonic value decomposition, an optimistic action identification network, and a probabilistic ε-resampling strategy within the CTDE paradigm. Results: Evaluated on multiple standard multi-agent benchmarks, our method significantly outperforms baselines including QMIX and QPLEX, achieving an average task completion rate improvement of 12.7% and effectively avoiding convergence to suboptimal policies.
📝 Abstract
The Centralized Training with Decentralized Execution (CTDE) paradigm is widely used in cooperative multi-agent reinforcement learning. However, due to the representational limitations of traditional monotonic value decomposition methods, algorithms can underestimate the values of optimal actions, driving policies toward suboptimal solutions. To address this challenge, we propose Optimistic $\epsilon$-Greedy Exploration, which enhances exploration to correct value estimation. Our analysis indicates that the underestimation arises from insufficient sampling of optimal actions during exploration. We introduce an optimistic updating network to identify optimal actions and sample actions from its distribution with probability $\epsilon$ during exploration, increasing the selection frequency of optimal actions. Experimental results in various environments show that Optimistic $\epsilon$-Greedy Exploration effectively prevents the algorithm from converging to suboptimal solutions and significantly improves its performance compared to other algorithms.
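The abstract describes replacing uniform random exploration with sampling from an optimistic network's action distribution. A minimal sketch of that selection rule, assuming a softmax over the optimistic network's per-action values (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def optimistic_epsilon_greedy(q_values, optimistic_q, epsilon, rng):
    """Hypothetical sketch of optimistic epsilon-greedy action selection.

    With probability epsilon, sample an action from a softmax over the
    optimistic network's value estimates (instead of uniformly at random);
    otherwise act greedily with respect to the agent's standard utilities.
    """
    if rng.random() < epsilon:
        # Softmax over optimistic estimates biases exploration toward
        # actions the optimistic network considers potentially optimal.
        logits = optimistic_q - optimistic_q.max()  # subtract max for stability
        probs = np.exp(logits) / np.exp(logits).sum()
        return int(rng.choice(len(optimistic_q), p=probs))
    # Exploit: greedy action under the standard value estimates.
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
action = optimistic_epsilon_greedy(
    q_values=np.array([0.2, 0.8, 0.1]),
    optimistic_q=np.array([0.5, 0.3, 1.2]),
    epsilon=0.1,
    rng=rng,
)
```

Increasing the sampling probability of actions the optimistic network rates highly is what lets underestimated optimal actions accumulate enough visits for their value estimates to recover.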