🤖 AI Summary
Large reasoning models (LRMs) suffer from excessive reasoning ("overthinking"), which inflates computational cost and slows inference. Existing knowledge distillation methods are offline and time-consuming, hindering online deployment, while online reinforcement learning (RL) approaches that rely solely on length-based rewards often degrade reflective capability. To address this, we propose a reflection-aware online RL framework: (i) a lightweight reflection model enables efficient real-time training; (ii) a novel reflection reward jointly optimizes for conciseness and explicit reflection quality; and (iii) parallel sampling with sequential revision supports dynamic trade-offs, preserving reflection for hard problems while reducing redundancy for easy ones. Experiments demonstrate a 35% reduction in inference cost with no performance degradation on downstream tasks and unchanged reflection frequency on challenging instances. Our code is publicly available.
📝 Abstract
Large Reasoning Models (LRMs) demonstrate strong performance on complex tasks but often overthink, leading to substantially higher inference costs. Existing approaches synthesize shorter reasoning responses for LRMs to learn from, but are inefficient for online usage due to time-consuming data generation and filtering. Meanwhile, online reinforcement learning mainly adopts a length reward to encourage short reasoning responses, but tends to erode reflection ability and harm performance. To address these issues, we propose REA-RL, which introduces a small reflection model for efficient scaling in online training, offering both parallel sampling and sequential revision. In addition, a reflection reward is designed to prevent LRMs from favoring short yet non-reflective responses. Experiments show that both methods maintain or enhance performance while significantly improving inference efficiency. Their combination achieves a good balance between performance and efficiency, reducing inference costs by 35% without compromising performance. Further analysis demonstrates that our methods are effective because they maintain reflection frequency for hard problems while appropriately reducing it for simpler ones, without losing reflection ability. Code is available at https://github.com/hexuandeng/REA-RL.
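To illustrate the failure mode the abstract describes, here is a minimal toy sketch of a reward that combines a length penalty with a reflection bonus. This is not REA-RL's actual reward function; the marker list, weights, and token budget are hypothetical assumptions chosen only to show why a pure length reward can push a model to drop explicit self-checks, and how a reflection term counteracts that.

```python
# Toy sketch (NOT the paper's reward): correctness reward, minus a penalty for
# tokens beyond a budget, plus a bonus if the response shows explicit reflection.

# Hypothetical surface markers of reflection; a real system would detect this
# with a learned model, not string matching.
REFLECTION_MARKERS = ("wait", "let me double-check", "on second thought")

def toy_reward(response: str, is_correct: bool,
               target_len: int = 512, len_weight: float = 0.001,
               reflect_bonus: float = 0.1) -> float:
    """Return a shaped reward for one sampled response."""
    if not is_correct:
        return 0.0  # no shaping for wrong answers in this toy version
    # Penalize only the tokens beyond the budget (whitespace split as a proxy).
    n_tokens = len(response.split())
    length_penalty = len_weight * max(0, n_tokens - target_len)
    # A length penalty alone rewards dropping reflection; the bonus below
    # keeps short-but-reflective responses preferable to short-and-blind ones.
    reflects = any(m in response.lower() for m in REFLECTION_MARKERS)
    return 1.0 - length_penalty + (reflect_bonus if reflects else 0.0)
```

Under this shaping, a concise answer that still contains an explicit self-check scores higher than an equally concise answer without one, which is the trade-off the reflection reward is meant to preserve.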