Circular Reasoning: Understanding Self-Reinforcing Loops in Large Reasoning Models

πŸ“… 2026-01-09
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This study addresses the tendency of large reasoning models to fall into self-reinforcing loops during long-chain reasoning, which leads to computational waste and reasoning failure. The work traces this phenomenon to reasoning impasses sustained by a V-shaped self-reinforcing attention mechanism. Introducing the concept of β€œstate collapse,” the authors reveal a critical boundary at which semantic repetition precedes textual repetition. To characterize the issue, they construct LoopBench, a benchmark dataset capturing both numerical and statement looping patterns, and propose an early detection method based on the CUSUM (cumulative sum) algorithm that captures loop precursors before the loop fully manifests. Extensive experiments across multiple mainstream large language models demonstrate the effectiveness of the approach, significantly enhancing the stability and efficiency of long-chain reasoning.

πŸ“ Abstract
Despite the success of test-time scaling, Large Reasoning Models (LRMs) frequently encounter repetitive loops that lead to computational waste and inference failure. In this paper, we identify a distinct failure mode termed Circular Reasoning. Unlike traditional model degeneration, this phenomenon manifests as a self-reinforcing trap where generated content acts as a logical premise for its own recurrence, compelling the reiteration of preceding text. To systematically analyze this phenomenon, we introduce LoopBench, a dataset designed to capture two distinct loop typologies: numerical loops and statement loops. Mechanistically, we characterize circular reasoning as a state collapse exhibiting distinct boundaries, where semantic repetition precedes textual repetition. We reveal that reasoning impasses trigger the loop onset, which subsequently persists as an inescapable cycle driven by a self-reinforcing V-shaped attention mechanism. Guided by these findings, we employ the Cumulative Sum (CUSUM) algorithm to capture these precursors for early loop prediction. Experiments across diverse LRMs validate its accuracy and elucidate the stability of long-chain reasoning.
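The abstract describes using the Cumulative Sum (CUSUM) algorithm to flag loop precursors before textual repetition sets in. As a rough illustration of the idea (not the paper's actual implementation), a one-sided CUSUM statistic can be run over a per-step repetition signal, such as the semantic similarity between each new reasoning step and recent history; the `target`, `drift`, and `threshold` values below are illustrative placeholders, not values from the paper.

```python
def cusum_detect(signal, target=0.0, drift=0.05, threshold=1.0):
    """Return the first index where the upper CUSUM statistic exceeds
    `threshold`, or None if no change point is flagged.

    `signal` is any per-step scalar that rises when reasoning starts to
    repeat (e.g., a semantic-similarity score in [0, 1]).
    """
    s = 0.0
    for i, x in enumerate(signal):
        # Accumulate deviations above the expected level, minus a drift
        # allowance that suppresses small benign fluctuations.
        s = max(0.0, s + (x - target) - drift)
        if s > threshold:
            return i
    return None

# Hypothetical similarity scores that jump once the model begins looping:
scores = [0.1, 0.05, 0.12, 0.08, 0.6, 0.7, 0.75, 0.8]
print(cusum_detect(scores))  # -> 5, shortly after the jump at index 4
```

Because CUSUM accumulates evidence rather than thresholding single steps, it fires a few steps after the regime shift but ignores isolated high-similarity steps, which matches the paper's framing of catching precursors of an impending loop rather than reacting to one noisy measurement.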
Problem

Research questions and friction points this paper is trying to address.

Circular Reasoning
Large Reasoning Models
Self-reinforcing Loops
Reasoning Impasse
Repetitive Loops
Innovation

Methods, ideas, or system contributions that make the work stand out.

Circular Reasoning
LoopBench
Self-reinforcing Attention
CUSUM Algorithm
State Collapse