R-Diverse: Mitigating Diversity Illusion in Self-Play LLM Training

📅 2026-02-13
📈 Citations: 0
Influential: 0

📝 Abstract
Self-play bootstraps LLM reasoning through an iterative Challenger-Solver loop: the Challenger is trained to generate questions that target the Solver's capabilities, and the Solver is optimized on the generated data to expand its reasoning skills. However, existing frameworks like R-Zero often exhibit non-sustained improvement, where early gains degrade as self-play continues. We identify a key failure mode, Diversity Illusion, where the Solver's training signals appear diverse yet collapse into recurring underlying patterns. It manifests as (1) Local Diversity Illusion, where diversity is enforced only within-batch, inducing cross-iteration mode cycling; and (2) Surface Diversity Illusion, where questions vary superficially but require near-identical reasoning skills. To mitigate them, we propose R-Diverse with two aligned innovations: Memory-Augmented Penalty (MAP), which uses a persistent memory bank to discourage recycling across iterations, and Skill-Aware Measurement (SAM), which evaluates diversity by the reasoning skills exercised rather than surface variation of questions. Across 10 math and general reasoning benchmarks, R-Diverse sustains gains over more iterations and consistently outperforms prior self-play methods. Code is available at https://github.com/Gengsheng-Li/R-Diverse.
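The Memory-Augmented Penalty (MAP) idea from the abstract — a persistent memory bank that discourages the Challenger from recycling questions across iterations — can be sketched roughly as follows. Everything here is an illustrative assumption (the toy bigram embedder, the cosine-similarity penalty, the threshold value, and all names); the paper's actual implementation may differ substantially.

```python
# Hypothetical sketch of a Memory-Augmented Penalty (MAP): keep a
# persistent bank of embeddings of past questions and penalize new
# Challenger questions that are too similar to any stored entry,
# discouraging cross-iteration recycling (unlike within-batch-only
# diversity checks). The embedder and threshold are toy assumptions.
import math


def embed(question: str) -> list[float]:
    # Toy stand-in for a real sentence embedder: hashed character-bigram
    # counts, L2-normalized so dot products are cosine similarities.
    dims = 64
    vec = [0.0] * dims
    for a, b in zip(question, question[1:]):
        vec[(ord(a) * 31 + ord(b)) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


class MemoryBank:
    def __init__(self, threshold: float = 0.9):
        self.entries: list[list[float]] = []  # persists across iterations
        self.threshold = threshold

    def penalty(self, question: str) -> float:
        """Return a penalty in [0, 1]: the max cosine similarity to any
        remembered question, zeroed when below the threshold."""
        if not self.entries:
            return 0.0
        v = embed(question)
        sim = max(sum(a * b for a, b in zip(v, e)) for e in self.entries)
        return sim if sim >= self.threshold else 0.0

    def add(self, question: str) -> None:
        # Called after each iteration so later iterations remember
        # earlier ones, countering cross-iteration mode cycling.
        self.entries.append(embed(question))
```

In a full pipeline this penalty would be subtracted from the Challenger's reward so near-duplicates of previously generated questions are discouraged.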
Problem

Research questions and friction points this paper is trying to address.

Diversity Illusion · Self-Play · LLM Training · Reasoning Skills · Training Signal Collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Play · Diversity Illusion · Memory-Augmented Penalty · Skill-Aware Measurement · LLM Training
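The Skill-Aware Measurement (SAM) contribution — judging diversity by the reasoning skills questions exercise rather than by surface variation — can be sketched as below. The keyword-based skill tagger and the small skill inventory are toy assumptions for illustration; a real system would presumably use a learned classifier or LLM-based tagging.

```python
# Hypothetical sketch of Skill-Aware Measurement (SAM): score a batch's
# diversity by the distribution of reasoning skills its questions
# exercise, not by surface-text variation. The keyword tagger and the
# skill inventory are illustrative assumptions only.
import math
from collections import Counter

SKILL_KEYWORDS = {
    "algebra": ["solve", "equation", "variable"],
    "geometry": ["triangle", "angle", "circle"],
    "combinatorics": ["count", "ways", "arrange"],
}


def tag_skills(question: str) -> set[str]:
    # Toy tagger: a skill applies if any of its keywords appears.
    q = question.lower()
    return {s for s, kws in SKILL_KEYWORDS.items() if any(k in q for k in kws)}


def skill_diversity(questions: list[str]) -> float:
    """Normalized entropy over exercised skills: 0 when every question
    needs the same skill, approaching 1 when skills are covered evenly.
    Superficially varied questions that all need one skill score 0,
    exposing the Surface Diversity Illusion."""
    counts = Counter(s for q in questions for s in tag_skills(q))
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(SKILL_KEYWORDS))
```

For example, a batch of rephrased algebra questions scores 0 despite surface variety, while a batch spanning algebra, geometry, and combinatorics scores near 1.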
Gengsheng Li
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences

Jinghan He
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences

Shijie Wang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences

Dan Zhang
National University of Singapore

Ruiqi Liu
Texas Tech University
nonparametric methods, machine learning, econometrics

Renrui Zhang
Seed ByteDance & MMLab & PKU
Large Multimodal Model, Generative Model, Embodied AI

Zijun Yao
Department of Computer Science and Technology, Tsinghua University
Natural Language Processing, Knowledge Engineering, Question Answering, Knowledge Reasoning

Junfeng Fang
National University of Singapore
Model Editing, AI Safety, LLM Explainability, AI4Science

Haiyun Guo
Rice University ECE Ph.D.
optical imaging, computational photography, Metalens

Jinqiao Wang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; Wuhan AI Research