Diversity-Aware Policy Optimization for Large Language Model Reasoning

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: The impact of solution diversity on reasoning capability remains largely unexplored in existing reinforcement learning (RL) training paradigms for large language models (LLMs). Method: This work establishes, for the first time, a strong positive correlation between solution diversity and reasoning potential (Potential@k), and proposes a diversity-aware policy optimization framework. It introduces a computationally tractable, token-level diversity objective and applies selective regularization exclusively to positive samples, thereby preserving negative-sample learning integrity. The method is implemented efficiently atop the R1-zero RL framework. Contribution/Results: On four mathematical reasoning benchmarks, the approach achieves an average +3.5% performance improvement while significantly enhancing both solution diversity and robustness. Experiments across 12 mainstream LLMs validate the method's generalizability and effectiveness.

📝 Abstract
The reasoning capabilities of large language models (LLMs) have advanced rapidly, particularly following the release of DeepSeek R1, which has inspired a surge of research into data quality and reinforcement learning (RL) algorithms. Despite the pivotal role diversity plays in RL, its influence on LLM reasoning remains largely underexplored. To bridge this gap, this work presents a systematic investigation into the impact of diversity in RL-based training for LLM reasoning, and proposes a novel diversity-aware policy optimization method. Across evaluations on 12 LLMs, we observe a strong positive correlation between solution diversity and Potential@k (a novel metric quantifying an LLM's reasoning potential) in high-performing models. This finding motivates our method to explicitly promote diversity during RL training. Specifically, we design a token-level diversity measure, reformulate it into a practical objective, and selectively apply it to positive samples. Integrated into the R1-zero training framework, our method achieves a 3.5% average improvement across four mathematical reasoning benchmarks, while generating more diverse and robust solutions.
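The abstract introduces Potential@k but does not define it here. Under a common Pass@k-style reading (the probability that at least one of k sampled solutions is correct, estimated from n generations of which c are correct), it could be computed with the standard unbiased estimator; this reading is an assumption, not the paper's definition:

```python
from math import comb

def potential_at_k(num_samples: int, num_correct: int, k: int) -> float:
    """Pass@k-style estimate: probability that at least one of k
    solutions drawn (without replacement) from num_samples generations
    is correct, given num_correct of them solved the problem.
    Illustrative stand-in for the paper's Potential@k metric."""
    if num_samples - num_correct < k:
        # Every size-k subset must contain at least one correct solution.
        return 1.0
    return 1.0 - comb(num_samples - num_correct, k) / comb(num_samples, k)
```

Averaging this quantity over a benchmark's problems yields a single score per model, which is the kind of per-model value one would correlate against a diversity measure.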
Problem

Research questions and friction points this paper is trying to address.

Investigates impact of diversity in RL for LLM reasoning
Proposes diversity-aware policy optimization for LLM training
Improves reasoning performance and solution diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diversity-aware policy optimization for LLM reasoning
Token-level diversity as practical RL objective
Selective application to positive training samples
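A minimal sketch of what "selective application to positive samples" could look like in an R1-zero-style advantage-shaping step. The unigram-overlap diversity measure, the bonus weight, and both function names are illustrative assumptions, not the paper's token-level objective:

```python
def diversity_bonus(token_seqs, weight=0.1):
    """Per-rollout bonus proportional to the fraction of a rollout's
    tokens that appear in no other rollout (a crude token-level
    diversity proxy; the paper's objective is more principled)."""
    bonuses = []
    for i, seq in enumerate(token_seqs):
        other_tokens = set()
        for j, other in enumerate(token_seqs):
            if j != i:
                other_tokens.update(other)
        unique = sum(1 for tok in seq if tok not in other_tokens)
        bonuses.append(weight * unique / max(len(seq), 1))
    return bonuses

def shaped_advantages(advantages, rewards, token_seqs, weight=0.1):
    """Add the diversity bonus only where reward > 0 (positive samples),
    leaving negative-sample advantages untouched so that learning from
    incorrect rollouts is not diluted."""
    pos_idx = [i for i, r in enumerate(rewards) if r > 0]
    pos_bonus = diversity_bonus([token_seqs[i] for i in pos_idx], weight)
    shaped = list(advantages)
    for bonus, i in zip(pos_bonus, pos_idx):
        shaped[i] += bonus
    return shaped
```

The design point the sketch illustrates is the selectivity: regularizing only correct rollouts encourages varied successful solutions without rewarding diverse failures.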
👥 Authors
Jian Yao
Wuhan University
Computer Vision, AI, 3D, Robotics, SLAM
Ran Cheng
Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University
Xingyu Wu
Hong Kong Polytechnic University
Automated machine learning, Causality-based machine learning, Large foundation model, AutoML
Jibin Wu
The Hong Kong Polytechnic University
Spiking Neural Network, Neuromorphic Computing, Speech Processing, Cognitive Modelling
Kay Chen Tan
Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University