🤖 AI Summary
Value decomposition–based multi-agent reinforcement learning often converges to suboptimal equilibria, limiting achievable performance. This work introduces an equilibrium stability theory that formally characterizes the convergence behavior of such methods and proposes a Multi-Round Value Factorization (MRVF) framework that iteratively eliminates suboptimal actions through a dynamic destabilization mechanism, thereby approaching the global optimum. By combining non-negative return increments with multi-round optimization, MRVF significantly outperforms state-of-the-art algorithms on challenging benchmarks including Predator-Prey and SMAC, demonstrating both the validity of the theoretical analysis and the effectiveness of the proposed framework.
📝 Abstract
Value factorization, a popular paradigm in MARL, faces a significant theoretical and algorithmic bottleneck: its tendency to converge to suboptimal solutions remains poorly understood and unaddressed. Existing theoretical analyses fail to explain this tendency because they focus primarily on the optimal case. To bridge this gap, we introduce a novel theoretical concept, the stable point, which characterizes the potential convergence of value factorization in general cases. By analyzing the distribution of stable points in existing methods, we show that non-optimal stable points are the primary cause of poor performance. Algorithmically, however, making the optimal action the unique stable point is nearly infeasible; iteratively filtering out suboptimal actions by rendering them unstable is a more practical route to global optimality. Motivated by this, we propose a novel Multi-Round Value Factorization (MRVF) framework. Specifically, by measuring a non-negative payoff increment relative to the previously selected action, MRVF turns inferior actions into unstable points, thereby driving each round toward a stable point with a superior action. Experiments on challenging benchmarks, including predator-prey tasks and the StarCraft Multi-Agent Challenge (SMAC), validate our analysis of stable points and demonstrate the superiority of MRVF over state-of-the-art methods.
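To make the notion of a non-optimal stable point concrete, here is a minimal toy sketch (not the authors' algorithm): a two-agent cooperative matrix game of the kind often used in the value-factorization literature, with hypothetical payoffs chosen for illustration, where greedy per-agent improvement settles at a fixed point that is not the global optimum.

```python
import numpy as np

# Hypothetical 2-agent cooperative payoff matrix (illustrative only):
# the jointly optimal action (0, 0) is surrounded by heavy penalties,
# so independent greedy improvement avoids it.
PAYOFF = np.array([
    [  8., -12., -12.],
    [-12.,   0.,   0.],
    [-12.,   0.,   0.],
])

def best_response_dynamics(payoff, start=(1, 1), rounds=20):
    """Each agent greedily improves its own action while the other's
    action is held fixed. A fixed point of this process plays the role
    of a 'stable point': the dynamics stop there, even though the joint
    action need not be globally optimal."""
    a1, a2 = start
    for _ in range(rounds):
        a1 = int(np.argmax(payoff[:, a2]))  # agent 1 best-responds
        a2 = int(np.argmax(payoff[a1, :]))  # agent 2 best-responds
    return (a1, a2), payoff[a1, a2]

joint, value = best_response_dynamics(PAYOFF)
# From (1, 1) the dynamics never leave the suboptimal fixed point with
# payoff 0, while the global optimum (0, 0) has payoff 8.
```

In this toy game, any single-agent deviation toward the optimal joint action incurs a -12 penalty, so the suboptimal joint action is stable under per-agent greedy updates; the multi-round filtering idea described in the abstract aims to destabilize exactly such points.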