🤖 AI Summary
In multi-agent reinforcement learning (MARL), population-based methods such as PSRO suffer from scalability bottlenecks: quadratic computational complexity and linear memory overhead caused by explicit policy-population storage and full payoff-matrix construction. To address this, we propose a generative evolutionary meta-solver that implicitly represents the policy population via latent-variable anchors and a conditional generator, augmented with an adaptive expansion mechanism, eliminating the need for explicit populations or payoff matrices while preserving theoretical Nash-equilibrium convergence guarantees. Policy optimization integrates Monte Carlo rollouts, multiplicative-weights meta-dynamics, a model-free empirical-Bernstein UCB oracle, and an advantage-based trust-region objective. Experiments across diverse games demonstrate up to 6× speedup, 1.3× lower memory usage, and significantly higher rewards, validating both efficiency and scalability.
📝 Abstract
Scalable multi-agent reinforcement learning (MARL) remains a central challenge for AI. Existing population-based methods, such as Policy-Space Response Oracles (PSRO), require storing explicit policy populations and constructing full payoff matrices, incurring quadratic computation and linear memory costs. We present the Generative Evolutionary Meta-Solver (GEMS), a surrogate-free framework that replaces explicit populations with a compact set of latent anchors and a single amortized generator. Instead of exhaustively constructing the payoff matrix, GEMS relies on unbiased Monte Carlo rollouts, multiplicative-weights meta-dynamics, and a model-free empirical-Bernstein UCB oracle to adaptively expand the policy set. Best responses are trained within the generator using an advantage-based trust-region objective, eliminating the need to store and train separate actors. We evaluated GEMS on a variety of two-player and multi-player games, including the Deceptive Messages Game, Kuhn Poker, and the Multi-Particle Environment. We find that GEMS is up to ~6× faster and uses about 1.3× less memory than PSRO, while simultaneously achieving higher rewards. These results demonstrate that GEMS retains the game-theoretic guarantees of PSRO while overcoming its fundamental inefficiencies, enabling scalable multi-agent learning across domains.
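Two of the named ingredients, multiplicative-weights meta-dynamics and an empirical-Bernstein UCB score, have standard textbook forms that can be sketched compactly. The following is a minimal illustrative Python sketch of those generic forms, not the paper's implementation; the hyperparameters (`eta`, `delta`, the reward range `b`) and the Maurer–Pontil-style bound constants are assumptions, and in GEMS the payoff entries would come from Monte Carlo rollout estimates rather than a stored matrix:

```python
import math

def multiplicative_weights(payoffs, eta=0.1, steps=200):
    """Multiplicative-weights meta-dynamics over a finite policy set.

    payoffs[i][j]: row player's payoff when policy i meets policy j
    (unbiased Monte Carlo estimates would stand in for these entries).
    Returns the meta-strategy (mixture weights) after `steps` updates.
    """
    n = len(payoffs)
    w = [1.0] * n
    for _ in range(steps):
        total = sum(w)
        sigma = [wi / total for wi in w]
        # Expected payoff of each policy against the current mixture.
        u = [sum(payoffs[i][j] * sigma[j] for j in range(n)) for i in range(n)]
        # Exponential reweighting toward better-performing policies.
        w = [w[i] * math.exp(eta * u[i]) for i in range(n)]
    total = sum(w)
    return [wi / total for wi in w]

def empirical_bernstein_ucb(rewards, delta=0.05, b=1.0):
    """Generic empirical-Bernstein upper confidence bound for rewards
    bounded in [0, b]: sample mean plus a variance-sensitive bonus.
    A score like this can rank which candidate policy to expand next.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / (n - 1)  # sample variance
    log_term = math.log(3.0 / delta)
    bonus = math.sqrt(2.0 * var * log_term / n) + 3.0 * b * log_term / n
    return mean + bonus

# Rock-paper-scissors: the unique equilibrium is the uniform mixture,
# and starting from uniform weights the dynamics stay there.
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
sigma = multiplicative_weights(rps)
```

The variance term in the bonus is what makes the empirical-Bernstein bound "model-free" and sample-efficient: low-variance arms get tight confidence intervals quickly, so exploration concentrates on genuinely uncertain candidates.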