🤖 AI Summary
This work addresses the suboptimal performance of large reasoning models, which often produce redundant outputs on simple tasks and terminate prematurely on complex ones due to overconfidence. To tackle this issue, the authors propose Difficulty-Differentiated Policy Optimization (DDPO), the first algorithm that jointly models task difficulty and output length. DDPO compresses response length for easy tasks while expanding the exploration space for hard ones, and it reallocates the output length distribution based on theoretically derived average lengths per difficulty level. By integrating a difficulty-aware policy optimization mechanism with a theory-driven length allocation criterion, DDPO reduces average answer length by 12% relative to GRPO while simultaneously improving accuracy by 1.85% across multiple in-domain and out-of-domain benchmarks, achieving a significantly better trade-off between reasoning efficiency and accuracy.
📝 Abstract
Large Reasoning Models (LRMs) have shown exceptional reasoning capabilities, but they also suffer from overthinking, often generating excessively long and redundant answers.
For problems that exceed their capabilities, LRMs instead tend to exhibit overconfidence, generating overly short but incorrect answers, which may contribute to suboptimal performance.
To address these issues, we propose Difficulty-Differentiated Policy Optimization (DDPO), an efficient reinforcement learning algorithm that, motivated by the overconfidence phenomenon, optimizes simple and complex tasks separately.
Specifically, it reduces the output length for simple tasks without compromising accuracy, while for complex tasks it expands the exploration space to improve performance. We further derive the theoretical conditions for maximizing expected accuracy: the length distribution should be centered near the optimal length and be as concentrated as possible. Based on these conditions, we propose the per-difficulty-level average length as a well-founded reference for length optimization.
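To make the difficulty-differentiated idea concrete, below is a minimal, hypothetical sketch of a length-shaped group reward, inferred from the abstract rather than taken from the authors' released code. It assumes GRPO-style groups of sampled responses per prompt, proxies difficulty by the group's empirical accuracy, and uses the group's average length as the difficulty-level reference; the names `easy_threshold` and `alpha` are illustrative assumptions.

```python
# Hypothetical sketch of a difficulty-aware, length-shaped reward.
# Not the authors' implementation: difficulty is proxied by group accuracy
# (GRPO-style), and the group's mean length stands in for the paper's
# difficulty-level average length reference.
from statistics import mean


def ddpo_style_rewards(correct, lengths, easy_threshold=0.7, alpha=0.1):
    """Shape rewards for one prompt's group of sampled responses.

    correct: list of 0/1 correctness outcomes.
    lengths: token counts of the corresponding responses.
    Returns one shaped reward per response.
    """
    acc = mean(correct)       # empirical accuracy as a difficulty proxy
    ref_len = mean(lengths)   # difficulty-level average length reference
    rewards = []
    for c, length in zip(correct, lengths):
        r = float(c)
        if acc >= easy_threshold and c:
            # Easy task: penalize correct answers that exceed the
            # reference length, compressing output on simple problems.
            r -= alpha * max(0.0, (length - ref_len) / ref_len)
        # Hard task (acc below threshold): leave the correctness reward
        # untouched so the policy keeps exploring longer reasoning chains.
        rewards.append(r)
    return rewards
```

For an easy group (say 3 of 4 responses correct), a correct answer longer than the group average is scored slightly below a shorter correct one, while a hard group's rewards reduce to plain correctness, preserving exploration.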
Extensive experiments on both in-domain and out-of-domain benchmarks validate the superiority and effectiveness of DDPO. Compared to GRPO, DDPO reduces the average answer length by 12% while improving accuracy by 1.85% across multiple benchmarks, achieving a better trade-off between accuracy and length. The code is available at https://github.com/Yinan-Xia/DDPO.