🤖 AI Summary
This paper targets three key bottlenecks in long-context reasoning with large language models (LLMs): quadratic computational complexity, hard context-length limits, and sharp performance degradation beyond the pretraining context window. It proposes an iterative reasoning paradigm that alternates short reasoning segments with dynamically generated intermediate summaries, enabling unbounded reasoning depth. The resulting sawtooth memory pattern decouples reasoning depth from computational cost without modifying the model architecture. The method comprises: (1) an iterative reasoning-chain design, (2) lightweight context compression, (3) dynamic progress summarization, and (4) reconstruction of long-context reasoning datasets into the iterative format (e.g., transforming OpenR1-Math into 333K training instances). On Qwen2.5-Math-7B, the approach achieves 3–13% accuracy gains on MATH500, AIME24, and GPQA_diamond with significantly reduced computational overhead, and experiments across multiple model architectures confirm it generalizes. A sketch of the inference loop follows below.
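To make the alternation of short reasoning segments and progress summaries concrete, here is a minimal sketch of how such an iterative loop could be driven at inference time. The `generate` helper, the prompt templates, the `Final answer:` termination marker, and the token budgets are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of InftyThink-style iterative reasoning.
# All helper names, prompts, and budgets are hypothetical.

MAX_SEGMENT_TOKENS = 4096   # per-iteration reasoning budget (assumed)
MAX_ITERATIONS = 8          # safety cap on reasoning rounds (assumed)

def generate(prompt: str, max_tokens: int) -> str:
    """Placeholder for a call to the underlying LLM."""
    raise NotImplementedError

def infty_think(question: str) -> str:
    summary = ""  # running progress summary; starts empty
    for _ in range(MAX_ITERATIONS):
        # Each iteration sees only the question plus a short summary,
        # so the prompt length stays bounded (the sawtooth memory pattern).
        prompt = (
            f"Question: {question}\n"
            f"Progress so far: {summary}\n"
            "Continue reasoning:"
        )
        segment = generate(prompt, max_tokens=MAX_SEGMENT_TOKENS)
        if "Final answer:" in segment:  # assumed termination marker
            return segment.split("Final answer:")[-1].strip()
        # Compress the new segment into an updated summary, then discard it.
        summary = generate(
            f"Summarize the reasoning progress:\n{summary}\n{segment}",
            max_tokens=512,
        )
    return summary  # best effort if no final answer emerged
```

The key design point is that the full reasoning trace is never held in context; only the question and the latest compressed summary cross iteration boundaries.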
📝 Abstract
Advanced reasoning in large language models has achieved remarkable performance on challenging tasks, but the prevailing long-context reasoning paradigm faces critical limitations: quadratic computational scaling with sequence length, reasoning constrained by maximum context boundaries, and performance degradation beyond pre-training context windows. Existing approaches primarily compress reasoning chains without addressing the fundamental scaling problem. To overcome these challenges, we introduce InftyThink, a paradigm that transforms monolithic reasoning into an iterative process with intermediate summarization. By interleaving short reasoning segments with concise progress summaries, our approach enables unbounded reasoning depth while maintaining bounded computational costs. This creates a characteristic sawtooth memory pattern that significantly reduces computational complexity compared to traditional approaches. Furthermore, we develop a methodology for reconstructing long-context reasoning datasets into our iterative format, transforming OpenR1-Math into 333K training instances. Experiments across multiple model architectures demonstrate that our approach reduces computational costs while improving performance, with Qwen2.5-Math-7B showing 3-13% improvements across MATH500, AIME24, and GPQA_diamond benchmarks. Our work challenges the assumed trade-off between reasoning depth and computational efficiency, providing a more scalable approach to complex reasoning without architectural modifications.
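To make the scaling claim concrete, a back-of-the-envelope comparison of attention cost is sketched below. The segment size, summary size, and trace length are illustrative assumptions, not measurements from the paper; self-attention over a sequence of length n is taken as O(n²).

```python
# Illustrative attention-cost comparison (assumed numbers, not paper results).

def monolithic_cost(total_tokens: int) -> int:
    # One pass attending over the entire reasoning trace.
    return total_tokens ** 2

def iterative_cost(total_tokens: int, segment: int = 4096, summary: int = 512) -> int:
    iterations = -(-total_tokens // segment)  # ceiling division
    # Each iteration attends over one bounded segment plus the carried summary.
    return iterations * (segment + summary) ** 2

n = 32_768  # a hypothetical 32K-token reasoning trace
print(monolithic_cost(n) / iterative_cost(n))  # roughly 6x fewer operations
```

Under these assumptions the gap widens as the trace grows, since the monolithic cost is quadratic in total length while the iterative cost grows only linearly with the number of segments.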