CPMobius: Iterative Coach-Player Reasoning for Data-Free Reinforcement Learning

📅 2026-02-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the heavy reliance of large language model reasoning training on high-quality human-annotated data by proposing an unsupervised reinforcement learning framework that requires no external data. Inspired by collaborative human learning in sports, it introduces a novel non-adversarial Coach-Player dual-role mechanism: the Coach dynamically generates mathematical reasoning tasks tailored to the Player's current capability, while the Player improves by solving these tasks, forming a closed-loop iterative optimization process. By integrating multi-agent collaboration, dynamic instruction generation, and a reward mechanism based on performance progression, the approach achieves a 4.9% absolute gain in overall accuracy and a 5.4% improvement on out-of-distribution benchmarks with Qwen2.5-Math-7B-Instruct, significantly outperforming existing unsupervised methods such as RENT and R-zero.

πŸ“ Abstract
Large Language Models (LLMs) have demonstrated strong potential in complex reasoning, yet their progress remains fundamentally constrained by reliance on massive high-quality human-curated tasks and labels, either through supervised fine-tuning (SFT) or reinforcement learning (RL) on reasoning-specific data. This dependence renders supervision-heavy training paradigms increasingly unsustainable, with signs of diminishing scalability already evident in practice. To overcome this limitation, we introduce CPM\"obius (CPMobius), a collaborative Coach-Player paradigm for data-free reinforcement learning of reasoning models. Unlike traditional adversarial self-play, CPM\"obius, inspired by real world human sports collaboration and multi-agent collaboration, treats the Coach and Player as independent but cooperative roles. The Coach proposes instructions targeted at the Player's capability and receives rewards based on changes in the Player's performance, while the Player is rewarded for solving the increasingly instructive tasks generated by the Coach. This cooperative optimization loop is designed to directly enhance the Player's mathematical reasoning ability. Remarkably, CPM\"obius achieves substantial improvement without relying on any external training data, outperforming existing unsupervised approaches. For example, on Qwen2.5-Math-7B-Instruct, our method improves accuracy by an overall average of +4.9 and an out-of-distribution average of +5.4, exceeding RENT by +1.5 on overall accuracy and R-zero by +4.2 on OOD accuracy.
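The cooperative Coach-Player loop described in the abstract can be sketched as a toy simulation. Everything below is an illustrative assumption, not the paper's implementation: the Player's ability is modeled as a single scalar, the Coach proposes a task difficulty, and the Coach's reward is the *change* in the Player's performance rather than a win/loss signal.

```python
import random

def coach_player_loop(rounds=50, seed=0):
    """Toy sketch of the non-adversarial Coach-Player loop:
    the Coach proposes tasks near the Player's current capability,
    the Player improves by solving them, and the Coach is rewarded
    by the Player's performance progression (hypothetical model)."""
    rng = random.Random(seed)
    skill = 0.2            # Player's scalar "ability" (illustrative)
    coach_target = 0.2     # Coach's current task-difficulty proposal
    coach_reward_total = 0.0
    for _ in range(rounds):
        before = skill
        # Player learns fastest when task difficulty sits slightly
        # above current skill (an assumed learning-progress shape).
        gap = coach_target - skill
        learn = 0.05 * max(0.0, 1.0 - abs(gap - 0.1) / 0.1)
        skill = min(1.0, skill + learn + rng.uniform(-0.005, 0.005))
        # Coach reward = Player's improvement this round, so the
        # Coach is incentivized to teach, not to defeat, the Player.
        coach_reward_total += skill - before
        # Coach adapts its proposal to track the Player's growth.
        coach_target = skill + 0.1
    return skill, coach_reward_total
```

In this sketch the Coach's incentive is aligned with the Player's progress, which is the key difference from adversarial self-play, where a task generator is rewarded for making the solver fail.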
Problem

Research questions and friction points this paper is trying to address.

data-free reinforcement learning
large language models
reasoning
supervision dependency
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

data-free reinforcement learning
coach-player collaboration
mathematical reasoning
unsupervised reasoning
multi-agent cooperation
Ran Li
Department of Computer Science and Technology, Tsinghua University
Zeyuan Liu
Department of Computer Science and Technology, Tsinghua University
Yinghao Chen
Department of Computer Science and Technology, Tsinghua University
Bingxiang He
Second-year PhD Candidate, Tsinghua University (Natural Language Processing)
Jiarui Yuan
Department of Computer Science and Technology, Tsinghua University
Zixuan Fu
Nanyang Technological University (Image Restoration, Generative Models, Low-level Vision)
Weize Chen
Tsinghua University (NLP, ML)
Jinyi Hu
Department of Computer Science and Technology, Tsinghua University
Zhiyuan Liu
Tsinghua University (autonomous driving, traffic simulation)
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University (Natural Language Processing, Artificial Intelligence, Social Computing)