🤖 AI Summary
Multi-agent reinforcement learning (MARL) suffers from rotational optimization dynamics induced by competitive objectives, leading to poor convergence and reproducibility crises. Method: This work systematically introduces variational inequality (VI) theory into MARL for the first time, establishing a unified modeling framework that overcomes limitations of conventional optimization methods under non-monotonic game dynamics. We propose a gradient-based VI solver—compatible with mainstream algorithms such as MAPPO and QMIX—that integrates projected gradient descent with forward-backward splitting. Contribution/Results: Experiments demonstrate significantly improved Nash equilibrium convergence in zero-sum games (e.g., Rock-Paper-Scissors and Matching Pennies). In the Predator-Prey cooperative task, team coordination efficiency increases markedly, yielding a 27% average reward improvement. This work establishes a new paradigm for MARL that bridges theoretical rigor and engineering practicality.
📝 Abstract
Multi-agent reinforcement learning (MARL) has emerged as a powerful paradigm for solving complex problems through agents' cooperation and competition, finding widespread applications across domains. Despite its success, MARL faces a reproducibility crisis. We show that, in part, this issue is related to the rotational optimization dynamics arising from competing agents' objectives, and require methods beyond standard optimization algorithms. We reframe MARL approaches using Variational Inequalities (VIs), offering a unified framework to address such issues. Leveraging optimization techniques designed for VIs, we propose a general approach for integrating gradient-based VI methods capable of handling rotational dynamics into existing MARL algorithms. Empirical results demonstrate significant performance improvements across benchmarks. In zero-sum games, Rock--paper--scissors and Matching pennies, VI methods achieve better convergence to equilibrium strategies, and in the Multi-Agent Particle Environment: Predator-prey, they also enhance team coordination. These results underscore the transformative potential of advanced optimization techniques in MARL.