SAGE: Multi-Agent Self-Evolution for LLM Reasoning

📅 2026-03-16
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the limitations of existing reinforcement learning-based approaches for large language model (LLM) reasoning: they often rely on extensive human annotations, and self-play strategies that lack explicit planning and quality control make long-horizon multi-step reasoning unstable. To overcome these challenges, the authors propose SAGE, a closed-loop multi-agent self-evolution framework that introduces, for the first time, a collaborative four-role mechanism comprising a Challenger, a Planner, a Solver, and a Critic. Requiring only minimal seed data, SAGE enables stable curriculum evolution through iterative task generation, structured planning, solution synthesis, and critical filtering. By integrating multi-agent reinforcement learning, self-generated curriculum learning, and an external verifier-driven reward mechanism, SAGE achieves substantial gains on mathematics and code-generation benchmarks, improving Qwen-2.5-7B by 8.9% on LiveCodeBench and 10.7% on OlympiadBench.

📝 Abstract
Reinforcement learning with verifiable rewards improves reasoning in large language models (LLMs), but many methods still rely on large human-labeled datasets. While self-play reduces this dependency, it often lacks explicit planning and strong quality control, limiting stability in long-horizon multi-step reasoning. We present SAGE (Self-evolving Agents for Generalized reasoning Evolution), a closed-loop framework in which four agents (Challenger, Planner, Solver, and Critic) co-evolve from a shared LLM backbone using only a small seed set. The Challenger continuously generates increasingly difficult tasks; the Planner converts each task into a structured multi-step plan; and the Solver follows the plan to produce an answer, whose correctness is determined by external verifiers. The Critic scores and filters both generated questions and plans to prevent curriculum drift and maintain training-signal quality, enabling stable self-training. Across mathematics and code-generation benchmarks, SAGE delivers consistent gains across model scales, improving the Qwen-2.5-7B model by 8.9% on LiveCodeBench and 10.7% on OlympiadBench.
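To make the loop concrete, below is a minimal Python sketch of how the four roles could interact around an external verifier. This is not the authors' implementation: every function name, threshold, and the difficulty-update rule is an illustrative assumption, and each agent is a stub standing in for a prompted call to the shared LLM backbone.

```python
# Minimal schematic of SAGE's four-role closed loop (illustrative only).
# All names, thresholds, and update rules are assumptions, not the paper's code.

import random
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    difficulty: float  # 0.0 (easy) to 1.0 (hard)


def challenger(seed_tasks, difficulty):
    """Generate a harder variant of a seed task (stub for the Challenger agent)."""
    base = random.choice(seed_tasks)
    return Task(prompt=f"{base.prompt} [target difficulty {difficulty:.2f}]",
                difficulty=difficulty)


def planner(task):
    """Decompose the task into an ordered multi-step plan (stub for the Planner)."""
    return [f"step {i + 1} for: {task.prompt}" for i in range(3)]


def solver(task, plan):
    """Follow the plan and produce a candidate answer (stub for the Solver)."""
    return f"answer derived from a {len(plan)}-step plan for: {task.prompt}"


def verify(task, answer):
    """External verifier, e.g. unit tests for code or a symbolic checker for math.
    Stubbed so that harder tasks pass less often."""
    return random.random() > task.difficulty


def critic_score(task, plan):
    """Critic's joint quality score for the generated question and plan (stub)."""
    return random.random()


def sage_loop(seed_tasks, iterations=200, critic_threshold=0.5):
    """One self-evolution run: generate, filter, solve, verify, adjust curriculum."""
    difficulty, accepted = 0.1, []
    for _ in range(iterations):
        task = challenger(seed_tasks, difficulty)
        plan = planner(task)
        # Critic filtering: drop low-quality questions/plans to prevent curriculum drift.
        if critic_score(task, plan) < critic_threshold:
            continue
        answer = solver(task, plan)
        reward = 1.0 if verify(task, answer) else 0.0  # verifiable binary reward
        accepted.append((task, plan, answer, reward))
        # Curriculum control: ratchet difficulty up on success, ease off on failure.
        difficulty = (min(1.0, difficulty + 0.02) if reward
                      else max(0.05, difficulty - 0.01))
    return accepted  # verified samples that would feed the RL update


if __name__ == "__main__":
    seeds = [Task("Prove that the sum of two odd integers is even.", 0.1)]
    samples = sage_loop(seeds)
    print(f"kept {len(samples)} critic-filtered, verifier-labeled samples")
```

In the actual framework, each stub would be a prompted call to the shared LLM backbone, and the Critic-filtered, verifier-labeled samples would drive the reinforcement-learning update of all four roles.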
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
self-play
multi-step reasoning
quality control
training stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent self-evolution
closed-loop reinforcement learning
curriculum control
verifiable reasoning
LLM self-training
Yulin Peng
College of Computer Science and Software Engineering, Shenzhen University, China
Xinxin Zhu
College of Computer Science and Software Engineering, Shenzhen University, China; Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), China
Chenxing Wei
Shenzhen University
Nianbo Zeng
College of Computer Science and Software Engineering, Shenzhen University, China; Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), China
Leilei Wang
College of Computer Science and Software Engineering, Shenzhen University, China; Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), China
Ying Tiffany He
College of Computer Science and Software Engineering, Shenzhen University, China
F. Richard Yu
Carleton University, FRSC, FCAE, MAE, FIEEE, FEIC
Intelligent & Autonomous Systems, ML & Embodied AI, IoT, Blockchain