Let LLMs Break Free from Overthinking via Self-Braking Tuning

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large reasoning models (LRMs) suffer from redundant inference and high computational overhead due to excessively long chains of thought; existing mitigation strategies largely rely on external interventions to curb "overthinking." This paper proposes Self-Braking Tuning (SBT), a framework for endogenous regulation of reasoning length that requires no external control. SBT introduces a ground-truth-based quantitative metric for overthinking, designs a self-supervised scheme for annotating reasoning trajectories and constructing length-adaptive training data, and incorporates a braking-prompt fine-tuning mechanism that enables models to autonomously identify and terminate unproductive reasoning steps. Evaluated on mathematical reasoning benchmarks including AIME, AMC, MATH500, and GSM8K, SBT reduces inference token consumption by up to 60% while maintaining accuracy comparable to unconstrained baselines.
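The ground-truth-based overthinking metric can be pictured as measuring how much of a reasoning trace comes *after* the gold answer first appears. The sketch below is illustrative only: the function names (`first_solution_step`, `redundancy_ratio`), the whitespace tokenization, and the substring match against the gold answer are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a ground-truth-based overthinking metric.
# Names and the substring-matching heuristic are illustrative assumptions.

def first_solution_step(steps, gold_answer):
    """Index of the earliest reasoning step containing the gold answer,
    or None if the answer never appears in the trace."""
    for i, step in enumerate(steps):
        if gold_answer in step:
            return i
    return None

def redundancy_ratio(steps, gold_answer):
    """Fraction of reasoning tokens spent after the answer is first
    reached. Higher values indicate more overthinking."""
    hit = first_solution_step(steps, gold_answer)
    if hit is None:
        return 0.0  # trace never reaches the answer: nothing to truncate
    total = sum(len(s.split()) for s in steps)
    after = sum(len(s.split()) for s in steps[hit + 1:])
    return after / total if total else 0.0

steps = [
    "Let x be the unknown; 2x + 3 = 11.",
    "So 2x = 8 and x = 4.",
    "Wait, let me double-check: 2*4 + 3 = 11. Yes, x = 4.",
    "To be safe, I will re-derive it once more... x = 4 again.",
]
print(redundancy_ratio(steps, "x = 4"))
```

In this toy trace the answer is reached in the second step, so over half the tokens are redundant verification; a threshold on such a ratio could supply the training signal for when a trace should have braked.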

📝 Abstract
Large reasoning models (LRMs), such as OpenAI o1 and DeepSeek-R1, have significantly enhanced their reasoning capabilities by generating longer chains of thought, demonstrating outstanding performance across a variety of tasks. However, this performance gain comes at the cost of a substantial increase in redundant reasoning during the generation process, leading to high computational overhead and exacerbating the issue of overthinking. Although numerous existing approaches aim to address the problem of overthinking, they often rely on external interventions. In this paper, we propose a novel framework, Self-Braking Tuning (SBT), which tackles overthinking from the perspective of allowing the model to regulate its own reasoning process, thus eliminating the reliance on external control mechanisms. We construct a set of overthinking identification metrics based on standard answers and design a systematic method to detect redundant reasoning. This method accurately identifies unnecessary steps within the reasoning trajectory and generates training signals for learning self-regulation behaviors. Building on this foundation, we develop a complete strategy for constructing data with adaptive reasoning lengths and introduce an innovative braking prompt mechanism that enables the model to naturally learn when to terminate reasoning at an appropriate point. Experiments across mathematical benchmarks (AIME, AMC, MATH500, GSM8K) demonstrate that our method reduces token consumption by up to 60% while maintaining comparable accuracy to unconstrained models.
Problem

Research questions and friction points this paper is trying to address.

Reducing redundant reasoning in large models to cut computational costs
Enabling self-regulation in models to avoid overthinking without external control
Maintaining accuracy while significantly decreasing token usage in reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Braking Tuning for autonomous reasoning control
Overthinking metrics based on standard answers
Braking prompt for adaptive reasoning termination
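One way to picture the adaptive-length data construction behind the braking prompt is to truncate each solved trace shortly after the answer first appears and append a termination phrase as the training target. The sketch below is a hypothetical rendering: the truncation rule, the `keep_extra` parameter, and the wording of `BRAKE_PROMPT` are assumptions, not the paper's actual templates.

```python
# Hypothetical sketch of braking-prompt training-data construction.
# The truncation heuristic and braking phrase are illustrative assumptions.

BRAKE_PROMPT = "I have verified the answer; further reasoning is unnecessary."

def build_self_braking_example(steps, gold_answer, keep_extra=1):
    """Truncate a trace shortly after the gold answer first appears and
    append a braking phrase, yielding a shorter fine-tuning target."""
    hit = next((i for i, s in enumerate(steps) if gold_answer in s), None)
    if hit is None:
        return list(steps)  # unsolved traces are left unchanged
    kept = steps[: hit + 1 + keep_extra]  # keep one verification step
    return kept + [BRAKE_PROMPT]

steps = [
    "2x = 8, so x = 4.",
    "Check: 2*4 = 8. Correct.",
    "Let me try another method to be sure...",
    "Yet another check, just in case...",
]
print(build_self_braking_example(steps, "x = 4"))
```

Fine-tuning on such shortened targets is what would let the model internalize the stopping behavior, rather than relying on an external length cap at inference time.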