🤖 AI Summary
This work addresses the challenges of quadratic computational overhead, context length constraints, and intermediate information loss in long-chain reasoning with large language models. To overcome these limitations, the authors propose an end-to-end trajectory-level reinforcement learning framework that, for the first time, applies reinforcement learning to optimize the entire iterative reasoning process. The model autonomously decides when and what to summarize while dynamically controlling whether to continue reasoning, enabling strategic summarization and continuation over an unbounded horizon. A two-stage training strategy is employed: a supervised warm-up followed by trajectory-level reinforcement learning, applied to the DeepSeek-R1-Distill-Qwen-1.5B base model. Evaluated on AIME24, the method achieves a 21% accuracy improvement over existing approaches, while simultaneously reducing inference latency, accelerating training, and demonstrating superior out-of-distribution generalization.
📝 Abstract
Large reasoning models achieve strong performance by scaling inference-time chain-of-thought, but this paradigm suffers from quadratic cost, context length limits, and degraded reasoning due to lost-in-the-middle effects. Iterative reasoning mitigates these issues by periodically summarizing intermediate thoughts, yet existing methods rely on supervised learning or fixed heuristics and fail to optimize when to summarize, what to preserve, and how to resume reasoning. We propose InftyThink+, an end-to-end reinforcement learning framework that optimizes the entire iterative reasoning trajectory, building on model-controlled iteration boundaries and explicit summarization. InftyThink+ adopts a two-stage training scheme with supervised cold-start followed by trajectory-level reinforcement learning, enabling the model to learn strategic summarization and continuation decisions. Experiments on DeepSeek-R1-Distill-Qwen-1.5B show that InftyThink+ improves accuracy by 21% on AIME24 and outperforms conventional long chain-of-thought reinforcement learning by a clear margin, while also generalizing better to out-of-distribution benchmarks. Moreover, InftyThink+ significantly reduces inference latency and accelerates reinforcement learning training, demonstrating improved reasoning efficiency alongside stronger performance.
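The summarize-and-continue loop that the abstract describes can be sketched as follows. This is a minimal toy illustrating the control flow only: the model call is stubbed with a counter, and every function name here is an illustrative assumption, not the paper's actual interface.

```python
# Toy sketch of iterative reasoning with model-controlled summarization
# and stopping, in the spirit of InftyThink+. Model calls are stubbed so
# the control flow is runnable; a real system would issue LLM calls.

def make_toy_model(steps_needed=3):
    """Stub 'model' that needs a few segments before emitting an answer."""
    state = {"steps": 0}

    def generate_segment(context, max_tokens):
        state["steps"] += 1
        if state["steps"] >= steps_needed:
            return "FINAL: 42"  # the model decides to stop and answer
        return f"partial reasoning #{state['steps']}"

    return generate_segment

def iterative_reason(question, generate_segment, max_iterations=8):
    summary = ""
    for _ in range(max_iterations):
        # The context carries only the question plus a summary, never the
        # full chain of thought, so per-iteration cost stays bounded
        # instead of growing quadratically with total reasoning length.
        context = question + ("\nSummary: " + summary if summary else "")
        segment = generate_segment(context, max_tokens=2048)
        if segment.startswith("FINAL:"):  # model-controlled stop decision
            return segment.removeprefix("FINAL:").strip()
        # Model-controlled summarization: keep a compressed trace
        # (trivially the latest segment in this toy).
        summary = segment
    return None  # iteration budget exhausted

answer = iterative_reason("What is 6 * 7?", make_toy_model())
```

In InftyThink+, the stop and summarization decisions inside this loop are what the trajectory-level reinforcement learning stage optimizes end to end, rather than being fixed heuristics.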