AI Summary
This work addresses the heavy reliance of large language model reasoning training on high-quality human-annotated data by proposing an unsupervised reinforcement learning framework that requires no external data. Inspired by human collaborative learning in sports, it introduces a novel non-adversarial Coach-Player dual-role mechanism: the Coach dynamically generates mathematical reasoning tasks tailored to the Player's current capability, while the Player improves through solving these tasks, forming a closed-loop iterative optimization process. Integrating multi-agent collaboration, dynamic instruction generation, and a reward mechanism based on performance progression, the approach achieves a 4.9% absolute gain in overall accuracy and a 5.4% improvement on out-of-distribution benchmarks on Qwen2.5-Math-7B-Instruct, significantly outperforming existing unsupervised methods such as RENT and R-zero.
Abstract
Large Language Models (LLMs) have demonstrated strong potential in complex reasoning, yet their progress remains fundamentally constrained by reliance on massive high-quality human-curated tasks and labels, either through supervised fine-tuning (SFT) or reinforcement learning (RL) on reasoning-specific data. This dependence renders supervision-heavy training paradigms increasingly unsustainable, with signs of diminishing scalability already evident in practice. To overcome this limitation, we introduce CPMöbius (CPMobius), a collaborative Coach-Player paradigm for data-free reinforcement learning of reasoning models. Unlike traditional adversarial self-play, CPMöbius, inspired by real-world human sports collaboration and multi-agent collaboration, treats the Coach and Player as independent but cooperative roles. The Coach proposes instructions targeted at the Player's capability and receives rewards based on changes in the Player's performance, while the Player is rewarded for solving the increasingly instructive tasks generated by the Coach. This cooperative optimization loop is designed to directly enhance the Player's mathematical reasoning ability. Remarkably, CPMöbius achieves substantial improvement without relying on any external training data, outperforming existing unsupervised approaches. For example, on Qwen2.5-Math-7B-Instruct, our method improves accuracy by an overall average of +4.9 points and an out-of-distribution average of +5.4 points, exceeding RENT by +1.5 on overall accuracy and R-zero by +4.2 on OOD accuracy.
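The cooperative loop described above can be sketched in a few lines. This is a minimal, illustrative skeleton only: the function names (`propose_task`, `attempt`, `probe_accuracy`) and the concrete reward shapes are assumptions for exposition, not the paper's actual implementation, which would train both roles with RL over an LLM.

```python
# Toy sketch of the non-adversarial Coach-Player loop.
# Assumed interfaces (not from the paper):
#   propose_task(acc) -> task     : Coach generates a task matched to current skill
#   attempt(task) -> bool         : Player tries to solve the task
#   probe_accuracy() -> float     : Player's accuracy on a held-out probe set

def run_loop(propose_task, attempt, probe_accuracy, rounds=3):
    """Run `rounds` Coach-Player iterations; return (player_reward, coach_reward) per round.

    The Player is rewarded for solving the proposed task; the Coach is rewarded
    by the *change* in the Player's probe accuracy (performance progression).
    """
    rewards = []
    prev_acc = probe_accuracy()
    for _ in range(rounds):
        task = propose_task(prev_acc)               # difficulty targets current capability
        player_reward = 1.0 if attempt(task) else 0.0
        acc = probe_accuracy()                      # re-probe after the update step
        coach_reward = acc - prev_acc               # progression-based Coach reward
        rewards.append((player_reward, coach_reward))
        prev_acc = acc
    return rewards
```

In a real instantiation, both roles are the same or separate LLM policies, `attempt` would involve a full RL update on the Player, and `probe_accuracy` would be measured on a fixed evaluation set so the Coach's reward reflects genuine progression.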