🤖 AI Summary
This work proposes CSRO, a novel framework that addresses the limited interpretability and debuggability of traditional multi-agent reinforcement learning by integrating large language models into multi-agent games. Best-response computation is reformulated, for the first time, as a code generation task, enabling the direct synthesis of human-readable strategy code through zero-shot prompting and iterative refinement. By coupling this approach with distributed LLM-based evolutionary systems such as AlphaEvolve, CSRO shifts the paradigm from parameter optimization to the synthesis of algorithmic behavior. Experiments show that the method matches baseline performance while producing diverse, transparent, and highly interpretable strategies, substantially improving the trustworthiness and comprehensibility of multi-agent systems.
📝 Abstract
Recent advances in multi-agent reinforcement learning, particularly Policy-Space Response Oracles (PSRO), have enabled the computation of approximate game-theoretic equilibria in increasingly complex domains. However, these methods rely on deep reinforcement learning oracles that produce `black-box' neural network policies, making them difficult to interpret, trust, or debug. We introduce Code-Space Response Oracles (CSRO), a novel framework that addresses this challenge by replacing RL oracles with Large Language Models (LLMs). CSRO reframes best-response computation as a code generation task, prompting an LLM to generate policies directly as human-readable code. This approach not only yields inherently interpretable policies but also leverages the LLM's pretrained knowledge to discover complex, human-like strategies. We explore multiple ways to construct and enhance an LLM-based oracle: zero-shot prompting, iterative refinement, and \emph{AlphaEvolve}, a distributed LLM-based evolutionary system. We demonstrate that CSRO achieves performance competitive with baselines while producing a diverse set of explainable policies. Our work presents a new perspective on multi-agent learning, shifting the focus from optimizing opaque policy parameters to synthesizing interpretable algorithmic behavior.
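To make the core idea concrete, here is a minimal, illustrative sketch (not the paper's actual implementation) of what a code-as-policy oracle might look like. The `llm_generate_policy` function is a hypothetical stand-in for a real LLM call; it returns a human-readable strategy for rock-paper-scissors as source code, which is then compiled into a callable policy. The key property CSRO exploits is that the resulting policy is plain, inspectable code rather than opaque network weights.

```python
import random


def llm_generate_policy(prompt):
    """Hypothetical stand-in for an LLM-based oracle call.

    In CSRO, a real LLM would be prompted to synthesize best-response
    code; here we return a fixed, human-readable strategy so the
    sketch is self-contained and runnable.
    """
    code = (
        "def policy(history):\n"
        "    # Counter the opponent's most frequent past move.\n"
        "    if not history:\n"
        "        return random.choice(['R', 'P', 'S'])\n"
        "    most_common = max(set(history), key=history.count)\n"
        "    return {'R': 'P', 'P': 'S', 'S': 'R'}[most_common]\n"
    )
    namespace = {"random": random}
    exec(code, namespace)  # compile generated source into a callable
    return code, namespace["policy"]


code, policy = llm_generate_policy("Exploit an opponent that favours rock.")
print(code)                       # the policy itself is readable source
print(policy(["R", "R", "P"]))    # counters the frequent 'R' with 'P'
```

Because the policy is source code, it can be read, debugged, and iteratively refined by feeding evaluation results back into the LLM prompt, which is the loop the abstract's iterative-refinement and AlphaEvolve variants build on.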