🤖 AI Summary
To address the excessive output token counts and inference overhead that explicit chain-of-thought reasoning imposes on large language models (LLMs), this paper proposes TwT (Thinking without Tokens). TwT combines Habitual Reasoning Distillation, which internalizes explicit reasoning into the model's habitual behavior through teacher-guided compression, with Dual-Criteria Rejection Sampling (DCRS), which uses multiple teacher models to construct a high-quality, diverse distillation dataset without labeled data, enabling a "token-free thinking" lightweight inference paradigm. Crucially, TwT eliminates intermediate reasoning tokens at inference time while preserving or even improving task accuracy (up to +13.6% over other distillation methods). It substantially reduces output token count, inference latency, and computational cost, making it well suited to edge devices and high-concurrency deployments. The core contribution is integrating a cognitively inspired habit-formation mechanism into knowledge distillation, establishing an efficient implicit-reasoning inference paradigm that requires no explicit reasoning traces.
📝 Abstract
Large Language Models (LLMs) have made significant strides in problem-solving by incorporating reasoning processes. However, this enhanced reasoning capability results in an increased number of output tokens during inference, leading to higher computational costs. To address this challenge, we propose TwT (Thinking without Tokens), a method that reduces inference-time costs through habitual reasoning distillation with multi-teacher guidance, while maintaining high performance. Our approach introduces a Habitual Reasoning Distillation method, which internalizes explicit reasoning into the model's habitual behavior through a teacher-guided compression strategy inspired by human cognition. Additionally, we propose Dual-Criteria Rejection Sampling (DCRS), a technique that generates a high-quality and diverse distillation dataset using multiple teacher models, making our method suitable for unsupervised scenarios. Experimental results demonstrate that TwT effectively reduces inference costs while preserving superior performance, achieving up to a 13.6% improvement in accuracy with fewer output tokens compared to other distillation methods, offering a highly practical solution for efficient LLM deployment.
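The abstract describes DCRS only at a high level: multiple teachers generate candidates, and samples are kept only if they pass both a quality check and a diversity check, with no ground-truth labels required. A minimal sketch of that idea, assuming (as an illustration, not the paper's actual criteria) majority agreement among teachers as the quality signal and string-overlap filtering as the diversity signal, might look like:

```python
from collections import Counter
from difflib import SequenceMatcher


def dual_criteria_rejection_sampling(question, teachers,
                                     quality_threshold=2,
                                     diversity_threshold=0.9,
                                     max_keep=4):
    """Hypothetical sketch of DCRS-style filtering.

    Each teacher is a callable returning {"answer": ..., "reasoning": ...}.
    Criterion 1 (quality, label-free): keep only candidates whose final
    answer matches the majority vote across teachers.
    Criterion 2 (diversity): reject candidates whose reasoning trace is a
    near-duplicate of one already kept.
    """
    candidates = [teacher(question) for teacher in teachers]

    # Quality: majority vote over final answers stands in for correctness.
    answers = [c["answer"] for c in candidates]
    majority, count = Counter(answers).most_common(1)[0]

    kept = []
    for cand in candidates:
        if cand["answer"] != majority or count < quality_threshold:
            continue  # fails the quality criterion
        # Diversity: skip reasoning traces too similar to accepted ones.
        if any(SequenceMatcher(None, cand["reasoning"],
                               k["reasoning"]).ratio() > diversity_threshold
               for k in kept):
            continue
        kept.append(cand)
        if len(kept) == max_keep:
            break
    return kept
```

In practice the quality criterion could be any verifier (e.g., an answer checker or scoring model) and the diversity criterion an embedding-distance filter; the structure of accept/reject against two independent gates stays the same.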