🤖 AI Summary
Large reasoning models (LRMs) suffer from redundant reasoning patterns, and existing compression methods often sacrifice accuracy. This paper proposes SIRI, a framework that dynamically balances exploration depth and inference efficiency via iterative reinforcement learning that periodically alternates between reasoning compression and expansion. Its core innovations include: (i) a dynamic rollout-length adjustment mechanism; (ii) length-constrained training during compression phases to increase reasoning density; and (iii) long-horizon exploration during expansion phases to preserve capability. Together, these break the traditional trade-off between reasoning efficiency and task performance. On the AIME24 benchmark, SIRI-low achieves a 43.2% accuracy gain and a 46.9% token reduction after three iterations; SIRI-high attains the highest accuracy among all compared methods while steadily approaching the Pareto frontier of performance versus efficiency.
📝 Abstract
We introduce SIRI, Scaling Iterative Reinforcement Learning with Interleaved Compression, a simple yet effective RL approach for Large Reasoning Models (LRMs) that enables more efficient and accurate reasoning. Existing studies have observed repetitive thinking patterns in LRMs, and attempts to reduce them often come at the cost of performance. In this paper, we show that this trade-off can be overcome through a training regime that iteratively alternates between compressing and expanding the reasoning budget by dynamically adjusting the maximum rollout length during training. The compression phase cuts the rollout length, forcing the model to make precise and valuable decisions within a limited context, which effectively reduces redundant tokens and increases reasoning density. The expansion phase then relaxes the length limit, giving the model room to explore and plan in long-horizon settings. Remarkably, we find that after each compression-expansion cycle, the model's performance improves even as its output length decreases, steadily pushing it closer to the Pareto frontier of the performance-efficiency trade-off. Trained from DeepSeek-R1-Distill-Qwen-1.5B, SIRI-low improves performance on AIME24 by 43.2% while reducing token usage by 46.9% after three iterations, and SIRI-high achieves the highest accuracy of all compared methods (Figure 1). Our findings shed light on the potential of periodically oscillating the LRM's output truncation length during training to dynamically balance exploration and efficiency in reasoning, converging towards an optimal "sweet spot" between the two. Our models are publicly available.
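
To make the training regime concrete, below is a minimal sketch of an oscillating rollout-length schedule of the kind the abstract describes. Everything here is an illustrative assumption: the function name, the phase lengths, and the token budgets (`compress_len`, `expand_len`) are hypothetical, since the paper's actual schedule, hyperparameters, and RL trainer interface are not specified in the abstract.

```python
# Hypothetical sketch of SIRI's interleaved compression-expansion schedule.
# All names and values are illustrative assumptions, not the paper's actual
# hyperparameters or API.

def rollout_length_schedule(step: int,
                            compress_steps: int = 100,
                            expand_steps: int = 100,
                            compress_len: int = 4_000,
                            expand_len: int = 16_000) -> int:
    """Return the max rollout (truncation) length for a given training step.

    Each cycle first compresses the budget, forcing the model to reason
    concisely within a limited context, then expands it to restore room
    for long-horizon exploration.
    """
    cycle = compress_steps + expand_steps
    phase = step % cycle
    return compress_len if phase < compress_steps else expand_len


if __name__ == "__main__":
    # Preview the oscillation over two full compression-expansion cycles.
    for step in range(0, 400, 50):
        print(step, rollout_length_schedule(step))
```

The key design choice the abstract emphasizes is the alternation itself: compression raises reasoning density under a tight budget, expansion restores exploration capacity, and repeating the cycle moves the model toward the performance-efficiency Pareto frontier.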