COvolve: Adversarial Co-Evolution of Large-Language-Model-Generated Policies and Environments via Two-Player Zero-Sum Game

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static or manually designed training environments constrain agents' continual learning and generalization capabilities. This work proposes an adversarial co-evolution framework grounded in large language models (LLMs), formulating the joint generation of environments and policies as a two-player zero-sum game. By computing mixed-strategy Nash equilibria over the evolving environment-policy pool, the approach yields robust learning dynamics and mitigates catastrophic forgetting. It enables the automatic generation of incrementally challenging curricula in the form of executable Python code, without requiring predefined task distributions. Evaluated on urban driving, symbolic maze solving, and geometric navigation tasks, the method evolves environments of progressively increasing complexity, demonstrating the feasibility and promise of open-ended continual learning.
📝 Abstract
A central challenge in building continually improving agents is that training environments are typically static or manually constructed. This restricts continual learning and generalization beyond the training distribution. We address this with COvolve, a co-evolutionary framework that leverages large language models (LLMs) to generate both environments and agent policies, expressed as executable Python code. We model the interaction between environment and policy designers as a two-player zero-sum game, ensuring adversarial co-evolution in which environments expose policy weaknesses and policies adapt in response. This process induces an automated curriculum in which environments and policies co-evolve toward increasing complexity. To guarantee robustness and prevent forgetting as the curriculum progresses, we compute the mixed-strategy Nash equilibrium (MSNE) of the zero-sum game, thereby yielding a meta-policy. This MSNE meta-policy ensures that the agent does not forget to solve previously seen environments while learning to solve previously unseen ones. Experiments in urban driving, symbolic maze-solving, and geometric navigation showcase that COvolve produces progressively more complex environments. Our results demonstrate the potential of LLM-driven co-evolution to achieve open-ended learning without predefined task distributions or manual intervention.
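The abstract's key robustness mechanism is the mixed-strategy Nash equilibrium (MSNE) of the environment-vs-policy zero-sum game, which weights past policies into a meta-policy. The paper does not publish its solver, but the MSNE of a finite zero-sum game can be computed by the standard linear-programming formulation, sketched below on a hypothetical empirical payoff matrix (rows = policies, columns = environments; entries = policy scores):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Mixed-strategy Nash equilibrium for the row (policy) player
    of a finite zero-sum game, via linear programming.

    payoff[i, j] = row player's payoff when row i meets column j.
    Returns (mixed strategy over rows, value of the game)."""
    m, n = payoff.shape
    # Variables: x (m row probabilities) followed by v (game value).
    # Maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j: v - sum_i payoff[i, j] * x_i <= 0,
    # i.e. the mixture guarantees at least v against each column.
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Row probabilities sum to one.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # v is a free variable
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:m], res.x[-1]

# Example: matching pennies has value 0 and uniform mixing.
strategy, value = solve_zero_sum(np.array([[1.0, -1.0],
                                           [-1.0, 1.0]]))
```

In the framework's terms, `strategy` would be the MSNE meta-policy's weights over previously generated policies, so older environments keep nonzero influence and are not forgotten; the payoff matrix itself is an assumption here, standing in for whatever evaluation scores COvolve records.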
Problem

Research questions and friction points this paper is trying to address.

continual learning
generalization
training environment
static environments
open-ended learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial co-evolution
large language models
zero-sum game
mixed-strategy Nash equilibrium
open-ended learning