🤖 AI Summary
Current LLM reasoning training relies heavily on human-annotated data, while synthetic and distillation-based approaches suffer from unstable quality and adapt poorly to evolving model capabilities.
Method: We propose the first data-free, tri-agent co-evolutionary framework—comprising Teacher, Solver, and Generator agents—that enables autonomous reasoning capability development from merely 100 seed problems. It integrates closed-loop preference feedback learning, dynamic curriculum generation, and policy distillation to jointly advance model competence and task difficulty without human annotation or pre-defined task repositories.
Contribution/Results: Evaluated on seven mathematical reasoning benchmarks, our method achieves an average improvement of +20.2 percentage points over baselines. The resulting student model surpasses commercial LLMs—including GPT-5 and Claude-4.1-Opus—demonstrating a significant step toward overcoming the data-dependency bottleneck in reasoning training.
📝 Abstract
Recent breakthroughs in large language models (LLMs) on reasoning tasks rely heavily on massive, high-quality datasets, typically human-annotated and thus difficult to scale. While data synthesis or distillation offers a promising alternative, existing methods struggle with inconsistent data quality and an inability to dynamically adapt to the evolving capabilities of the model, leading to suboptimal training signals. To address these limitations, we introduce Socratic-Zero, a fully autonomous framework that generates high-quality training data from minimal seed examples through the co-evolution of three agents: the Teacher, the Solver, and the Generator. The Solver continuously refines its reasoning by learning from preference feedback on both successful and failed trajectories; the Teacher adaptively crafts increasingly challenging questions based on the Solver's weaknesses; and the Generator distills the Teacher's question-design strategy to enable scalable, high-fidelity curriculum generation. This closed-loop system produces a self-improving curriculum, requiring no pre-existing tasks or labels. Remarkably, starting from only 100 seed questions, our Socratic-Solver-8B achieves an average gain of +20.2 percentage points over prior data synthesis methods across seven mathematical reasoning benchmarks (AMC23, AIME24-25, Olympiad, MATH-500, Minerva, and GSM8K), with consistent gains on both Qwen3 and GLM4 series models. Even more surprisingly, synthetic data from Socratic-Generator-32B enables student LLMs to achieve superior performance compared to other state-of-the-art (SOTA) commercial LLMs on these benchmarks, including Qwen3-235B-A22B, DeepSeek-V3.1-671B, GPT-5, Gemini-2.5-Pro, Grok-4, and Claude-4.1-Opus.
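To make the closed-loop dynamic concrete, here is a deliberately minimal toy sketch of the Solver/Teacher interaction described in the abstract. It is *not* the paper's implementation: model competence and question difficulty are collapsed into scalars, the preference-learning update is replaced by a crude success-rate increment, and all names (`success_rate`, `co_evolve`, the `gen-*` question labels) are hypothetical. It only illustrates the feedback structure: the Solver improves on the current curriculum, then the Teacher appends questions just beyond the Solver's frontier.

```python
def success_rate(skill: float, difficulty: float) -> float:
    """Toy stand-in for Solver accuracy: a logistic curve in (skill - difficulty)."""
    return 1.0 / (1.0 + 2.0 ** (difficulty - skill))

def co_evolve(seed_questions, rounds: int = 5):
    """Scalar caricature of the Socratic-Zero loop (illustrative only).

    Each round: (1) the Solver 'trains' on the curriculum, nudging its skill by
    its mean success rate (a crude proxy for preference-feedback learning);
    (2) the Teacher appends a new question slightly harder than the Solver's
    current skill, so difficulty tracks the Solver's frontier.
    """
    solver_skill = 0.0
    # seed questions start at difficulty 0.0 (arbitrary choice for the sketch)
    curriculum = [(q, 0.0) for q in seed_questions]
    for _ in range(rounds):
        # (1) Solver update: better average performance -> larger skill gain
        mean_rate = sum(success_rate(solver_skill, d) for _, d in curriculum) / len(curriculum)
        solver_skill += mean_rate
        # (2) Teacher update: target just beyond current competence
        curriculum.append((f"gen-{len(curriculum)}", solver_skill + 0.5))
    return solver_skill, curriculum

skill, curriculum = co_evolve(["q0", "q1", "q2"], rounds=5)
```

Even in this caricature, skill and curriculum difficulty rise together, which is the co-evolution the abstract describes; the Generator's role (distilling the Teacher so curriculum generation scales) has no analogue in a scalar model and is omitted.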