ROM: Real-time Overthinking Mitigation via Streaming Detection and Intervention

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of overthinking in large reasoning models, which leads to increased latency, higher computational cost, and answer drift during extended chain-of-thought generation. The authors propose the first streaming overthinking detection and intervention framework: a lightweight detection head monitors the late-layer hidden states of a frozen large language model in real time and triggers early termination of redundant reasoning steps. The method incorporates token-level supervision signals derived from solution correctness boundaries and a data augmentation strategy that mitigates distillation bias. Evaluated across seven benchmarks, the approach achieves 93.51% accuracy, reduces average response length to 1,159 tokens (47.2% shorter than the vanilla baseline), and improves inference efficiency by 121%.

📝 Abstract
Large Reasoning Models (LRMs) achieve strong accuracy on challenging tasks by generating long Chain-of-Thought traces, but suffer from overthinking: even after reaching the correct answer, they continue generating redundant reasoning steps. This behavior increases latency and compute cost and can also lead to answer drift. Existing mitigation methods either require training-heavy modification of the backbone or rely on hand-crafted heuristics that do not truly capture overthinking patterns. We propose ROM, the first method that formulates overthinking mitigation as a streaming prediction-and-control problem. ROM attaches a lightweight detection head to the late-layer hidden states of a frozen large language model backbone. It monitors tokens in real time and triggers an early transition to the final answer once overthinking is detected. We also introduce token-level supervision based on solution correctness boundaries and a data augmentation strategy that reduces distilled-data bias. Across seven benchmarks, ROM achieves the highest accuracy (93.51%), the shortest responses (1,159 tokens), and the best response efficiency. Compared with the vanilla baseline, it reduces response length by 47.2% and improves efficiency by 121%. These results show that streaming detection is a promising approach to real-time overthinking mitigation.
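The control loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the detection head is stood in for by a fixed random linear probe with a sigmoid, the per-token "late-layer hidden states" are plain NumPy vectors, and the names `overthink_prob` and `stream_with_intervention` are hypothetical. The point is the streaming shape of the method: score each token's hidden state as it arrives, and force the transition to the final answer the first time the score crosses a threshold.

```python
import numpy as np

# Hypothetical stand-in for ROM's lightweight detection head: a fixed
# linear probe over an 8-dim "hidden state". In the paper the head reads
# late-layer hidden states of a frozen LLM; here everything is synthetic.
rng = np.random.default_rng(0)
HIDDEN = 8
W = rng.normal(size=HIDDEN)  # probe weights (illustrative, untrained)
b = 0.0

def overthink_prob(h):
    """Sigmoid score of the detection head on one hidden state."""
    return 1.0 / (1.0 + np.exp(-(W @ h + b)))

def stream_with_intervention(hidden_states, threshold=0.9):
    """Consume per-token hidden states in order; intervene on first alarm.

    Returns (tokens_kept, intervened): how many reasoning tokens were
    emitted before the early transition to the final answer, and whether
    the transition was triggered at all.
    """
    for t, h in enumerate(hidden_states):
        if overthink_prob(h) >= threshold:
            # Here a real system would force an end-of-thought marker
            # and let the model emit its final answer immediately.
            return t, True
    return len(hidden_states), False
```

Because the head only reads hidden states, the backbone stays frozen and the check adds one small matrix-vector product per token, which is what makes real-time streaming detection cheap.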
Problem

Research questions and friction points this paper is trying to address.

overthinking
Large Reasoning Models
latency
compute cost
answer drift
Innovation

Methods, ideas, or system contributions that make the work stand out.

streaming detection
overthinking mitigation
lightweight detection head
token-level supervision
response efficiency