🤖 AI Summary
Large reasoning models (LRMs) suffer from "overthinking": they generate excessive, redundant reasoning steps that yield marginal performance gains while increasing computational overhead and degrading safety alignment and generalization. This paper introduces ThoughtMani, a fine-tuning-free, low-overhead, and highly general external reasoning guidance framework. Its core innovation is to be the first to identify and exploit external chains of thought (CoTs) generated by smaller models to *steer* the internal reasoning length of an LRM. Through structured injection (via `<think>…</think>` delimiters), cross-model collaborative reasoning, and integration of a lightweight CoT generator, ThoughtMani effectively compresses reasoning steps. Evaluated on QwQ-32B, it preserves original task performance while reducing output tokens by roughly 30%, incurring negligible computational overhead, and improving average safety alignment by 10%.
📝 Abstract
Recent advancements in large reasoning models (LRMs) have demonstrated the effectiveness of scaling test-time computation to enhance reasoning capabilities in multiple tasks. However, LRMs typically suffer from "overthinking" problems, where models generate significantly redundant reasoning steps while bringing limited performance gains. Existing work relies on fine-tuning to mitigate overthinking, which requires additional data, unconventional training setups, and risks safety misalignment and poor generalization. Through empirical analysis, we reveal an important characteristic of LRM behaviors: placing external CoTs generated by smaller models between the thinking tokens (`<think>` and `</think>`) can effectively manipulate the model to generate fewer thoughts. Building on these insights, we propose a simple yet efficient pipeline, ThoughtMani, to enable LRMs to bypass unnecessary intermediate steps and reduce computational costs significantly. We conduct extensive experiments to validate the utility and efficiency of ThoughtMani. For instance, when applied to QwQ-32B on the LiveBench/Code dataset, ThoughtMani keeps the original performance and reduces output token counts by approximately 30%, with little overhead from the CoT generator. Furthermore, we find that ThoughtMani enhances safety alignment by an average of 10%. Since model vendors typically serve models of different sizes simultaneously, ThoughtMani provides an effective way to construct more efficient and accessible LRMs for real-world applications.
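The injection step described above can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the exact prompt template, delimiter handling, and the helper name `build_thoughtmani_prompt` are assumptions; `small_model_cot` stands in for a chain of thought produced by a lightweight generator model.

```python
# Sketch of ThoughtMani-style CoT injection (assumed template, not the
# authors' code): an externally generated chain of thought is placed
# between the LRM's thinking delimiters so the model treats the reasoning
# as already completed and skips redundant internal thinking.

def build_thoughtmani_prompt(question: str, small_model_cot: str) -> str:
    """Return a decoding prefix for the LRM with the small model's CoT
    pre-filled inside the <think>...</think> region."""
    return f"{question}\n<think>\n{small_model_cot}\n</think>\n"

# Example usage: the resulting string would be fed to a large reasoning
# model (e.g. QwQ-32B) as its generation prefix.
prompt = build_thoughtmani_prompt(
    "What is 17 * 24?",
    "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
)
```

Because the thinking region arrives already closed by `</think>`, the LRM proceeds directly to its final answer rather than generating its own (typically much longer) reasoning trace.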