Beyond Model Scaling: Test-Time Intervention for Efficient Deep Reasoning

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency and performance degradation of large reasoning models in multi-step reasoning, often caused by overthinking or deviation from the correct path, compounded by the absence of dynamic intervention mechanisms. The authors propose the Think-with-Me paradigm, which introduces external intervention during inference for the first time: transitional connectives serve as decision points where human or LLM agents provide real-time feedback based on multi-criteria assessments of reasonableness and completeness, dynamically determining whether to continue or terminate reasoning. Trained via Group Relative Policy Optimization (GRPO), the model effectively adapts to interactive reasoning. On the AIME24 benchmark, the method achieves a 7.19% accuracy gain over QwQ-32B using only an 8K context window, reduces average reasoning length by 81%, and demonstrates strong performance in safety-sensitive and creative tasks.

📝 Abstract
Large Reasoning Models (LRMs) excel at multi-step reasoning but often suffer from inefficient reasoning processes such as overthinking and overshoot, where excessive or misdirected reasoning increases computational cost and degrades performance. Existing efficient reasoning methods operate in a closed-loop manner, lacking mechanisms for external intervention to guide the reasoning process. To address this, we propose Think-with-Me, a novel test-time interactive reasoning paradigm that introduces external feedback intervention into the reasoning process. Our key insights are that transitional conjunctions serve as natural intervention points, signaling phases of self-validation or exploration, and that using transitional words appropriately to prolong reasoning enhances performance, while excessive use degrades it. Building on these insights, Think-with-Me pauses reasoning at these points for external feedback, adaptively extending or terminating reasoning to reduce redundancy while preserving accuracy. The feedback is generated via a multi-criteria evaluation (rationality and completeness) and comes from either human or LLM proxies. We train the target model with Group Relative Policy Optimization (GRPO) to adapt it to this interactive mode. Experiments show that Think-with-Me achieves a superior balance between accuracy and reasoning length under limited context windows. On AIME24, Think-with-Me outperforms QwQ-32B by 7.19% in accuracy while reducing average reasoning length by 81% under an 8K window. The paradigm also benefits safety-sensitive and creative tasks.
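The intervention loop the abstract describes can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the scripted "model", the `TRANSITIONS` word list, and all function names are hypothetical stand-ins for the real LRM decoder and the human/LLM feedback proxy.

```python
# Toy sketch of the Think-with-Me loop: pause generation at transitional
# conjunctions, ask an external evaluator whether the partial trace is
# rational and complete, then either terminate or let reasoning continue.
# All names here are illustrative assumptions, not the paper's code.

TRANSITIONS = ("wait", "alternatively", "however")  # assumed pause points

def make_scripted_model(chunks):
    """Stand-in for an LRM: yields pre-scripted reasoning chunks, each paired
    with the transitional conjunction it stopped at (None = model finished)."""
    it = iter(chunks)
    def generate(_trace):
        try:
            return next(it)
        except StopIteration:
            return ("", None)
    return generate

def evaluate(trace):
    """Stand-in for the human/LLM proxy's multi-criteria check
    (rationality and completeness). Here: 'complete' once an answer appears."""
    return {"rational": True, "complete": "answer" in trace}

def think_with_me(question, generate, max_rounds=8):
    """Interactive decoding: alternate generation and external feedback."""
    trace, rounds = question, 0
    for _ in range(max_rounds):
        text, hit = generate(trace)
        trace += text
        rounds += 1
        if hit is None:                      # model finished on its own
            break
        verdict = evaluate(trace)
        if verdict["rational"] and verdict["complete"]:
            break                            # terminate: trace is good enough
        trace += f" {hit}"                   # continue past the pause point
    return trace, rounds

# The evaluator terminates reasoning as soon as the answer appears,
# skipping the scripted redundant self-validation chunks.
model = make_scripted_model([
    (" 2+2 is 4, so the answer is 4.", "wait"),
    (" let me re-check: yes, 4.", "however"),
    (" another re-check...", None),
])
trace, rounds = think_with_me("Q: 2+2?", model)
```

In this toy run the loop stops after the first chunk because the evaluator already judges the trace complete, which is the redundancy-cutting behavior the abstract claims.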
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
efficient reasoning
test-time intervention
overthinking
reasoning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time intervention
interactive reasoning
transitional conjunctions
Group Relative Policy Optimization
reasoning efficiency
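The GRPO tag above refers to Group Relative Policy Optimization, which the abstract says is used to train the model for the interactive mode. GRPO's core idea is to standardize each sampled response's reward against the other responses in the same group, replacing a learned value critic. A minimal plain-Python sketch of that advantage computation (not the authors' training code):

```python
# Group-relative advantage as used in GRPO: for a group of responses to the
# same prompt with rewards r_1..r_G, A_i = (r_i - mean(r)) / std(r).
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """Standardize each reward within its group; eps avoids division by zero."""
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: two correct (reward 1.0) and two incorrect (reward 0.0) rollouts.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct rollouts get positive advantages and incorrect ones negative, and the advantages sum to zero within the group, so the policy gradient pushes probability mass toward the better responses of each group.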
Qianyue Wang — South China University of Technology, Pazhou Laboratory
Jinwu Hu — South China University of Technology, Pazhou Laboratory
  Large Language Models, Computer Vision, Reinforcement Learning
Yufeng Wang — South China University of Technology, Peng Cheng Laboratory
Huanxiang Lin — South China University of Technology
Bolin Chen — South China University of Technology
Zhiquan Wen — South China University of Technology
Yaofo Chen — South China University of Technology
  Large Language Models, AutoML, Model Adaptation, Robustness
Mingkui Tan — South China University of Technology
  Machine Learning, Large-scale Optimization