🤖 AI Summary
This work addresses the inefficiency and performance degradation of Large Reasoning Models in multi-step reasoning, often caused by overthinking or deviating from the correct path, and compounded by the absence of dynamic intervention mechanisms. The authors propose the Think-with-Me paradigm, which for the first time introduces external intervention during inference: transitional conjunctions serve as decision points where human or LLM agents provide real-time feedback based on a multi-criteria assessment of rationality and completeness, dynamically determining whether reasoning should continue or terminate. Trained via Group Relative Policy Optimization (GRPO), the model adapts effectively to this interactive mode. On the AIME24 benchmark, the method achieves a 7.19% accuracy gain over QwQ-32B while using only an 8K context window and reducing average reasoning length by 81%, and it also performs well on safety and creative tasks.
📝 Abstract
Large Reasoning Models (LRMs) excel at multi-step reasoning but often suffer from inefficient reasoning behaviors such as overthinking and overshoot, where excessive or misdirected reasoning increases computational cost and degrades performance. Existing efficient reasoning methods operate in a closed-loop manner, lacking mechanisms for external intervention to guide the reasoning process. To address this, we propose Think-with-Me, a novel test-time interactive reasoning paradigm that introduces external feedback into the reasoning process. Our key insights are twofold: transitional conjunctions serve as natural intervention points, signaling phases of self-validation or exploration; and using transitional words appropriately to prolong reasoning enhances performance, while excessive use degrades it. Building on these insights, Think-with-Me pauses reasoning at these points for external feedback, adaptively extending or terminating reasoning to reduce redundancy while preserving accuracy. The feedback is generated via a multi-criteria evaluation of rationality and completeness and comes from either humans or LLM proxies. We train the target model with Group Relative Policy Optimization (GRPO) to adapt it to this interactive mode. Experiments show that Think-with-Me achieves a superior balance between accuracy and reasoning length under limited context windows: on AIME24, it outperforms QwQ-32B by 7.19% in accuracy while reducing average reasoning length by 81% under an 8K window. The paradigm also benefits safety and creative tasks.