🤖 AI Summary
To address the high token overhead, inference latency, and memory consumption caused by long chain-of-thought (CoT) generation in large language models (LLMs), this paper proposes TokenSqueeze, a self-supervised compression method that requires no human-annotated short answers. Using only model-generated data, TokenSqueeze compresses reasoning paths while preserving logical integrity via adaptive inference-depth selection and distribution-aligned linguistic refinement. Its core techniques include self-generated sample filtering, depth–task adaptive matching, and conciseness optimization guided by output-distribution alignment. On the MATH500 benchmark, a DeepSeek-R1-Distill-Qwen-7B model fine-tuned with TokenSqueeze achieves an average 50% token reduction with no loss in accuracy, substantially improving both inference efficiency and energy efficiency on complex reasoning tasks.
📝 Abstract
Emerging reasoning LLMs such as OpenAI-o1 and DeepSeek-R1 have achieved strong performance on complex reasoning tasks by generating long chain-of-thought (CoT) traces. However, these long CoTs increase token usage, leading to higher inference latency and memory consumption. As a result, balancing accuracy and reasoning efficiency has become essential for deploying reasoning LLMs in practical applications. Existing long-to-short (Long2Short) methods aim to reduce inference length but often sacrifice accuracy, revealing the need for an approach that maintains performance while lowering token costs. To address this efficiency-accuracy trade-off, we propose TokenSqueeze, a novel Long2Short method that condenses reasoning paths while preserving performance and relying exclusively on self-generated data. First, to prevent the performance degradation caused by excessive compression of reasoning depth, we select self-generated samples whose reasoning depth is adaptively matched to the complexity of the problem. Second, to optimize the linguistic expression without altering the underlying reasoning paths, we introduce a distribution-aligned linguistic refinement method that improves the clarity and conciseness of each reasoning path while preserving its logical integrity. Comprehensive experimental results demonstrate the effectiveness of TokenSqueeze in reducing token usage while maintaining accuracy. Notably, DeepSeek-R1-Distill-Qwen-7B fine-tuned with our method achieved a 50% average token reduction while preserving accuracy on the MATH500 benchmark. Because TokenSqueeze uses only the model's self-generated data, it enables efficient, high-fidelity reasoning across diverse applications without relying on manually curated short-answer datasets. Our code is available at https://github.com/zhangyx1122/TokenSqueeze.
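To make the two ideas in the abstract concrete, here is a minimal, illustrative sketch of adaptive-depth sample selection and distribution-aligned refinement. It is not the paper's released implementation: the `Sample` type, `select_by_adaptive_depth`, `refine_if_aligned`, the pass-rate difficulty proxy, the `avg_logprob` scoring helper, and the `tol` threshold are all hypothetical names and heuristics chosen for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Sample:
    """One self-generated solution: its reasoning trace, length, and correctness."""
    trace: str
    n_tokens: int
    correct: bool


def select_by_adaptive_depth(samples: List[Sample]) -> Optional[Sample]:
    """Keep a correct trace whose length is matched to estimated problem difficulty.

    Difficulty proxy (assumption): 1 - pass rate over the sampled solutions.
    Easy problems (high pass rate) keep the shortest correct trace; harder
    problems keep a proportionally longer one to avoid over-compressing depth.
    """
    correct = sorted((s for s in samples if s.correct), key=lambda s: s.n_tokens)
    if not correct:
        return None  # no usable self-generated sample for this problem
    difficulty = 1.0 - len(correct) / len(samples)  # in [0, 1)
    idx = min(int(difficulty * len(correct)), len(correct) - 1)
    return correct[idx]


def refine_if_aligned(
    original: str,
    rewrite: str,
    avg_logprob: Callable[[str], float],
    tol: float = 0.2,
) -> str:
    """Accept a shorter rewrite only if it stays close to the model's own distribution.

    `avg_logprob` stands in for a per-token log-likelihood score under the base
    model (a hypothetical helper, not an API from the paper's repository).
    """
    if len(rewrite) >= len(original):
        return original  # not actually more concise
    if avg_logprob(rewrite) >= avg_logprob(original) - tol:
        return rewrite   # concise and still in-distribution
    return original      # too far off the model's own phrasing; keep the original


if __name__ == "__main__":
    # Toy demo with made-up numbers, only to show the control flow.
    samples = [
        Sample("long correct trace ...", n_tokens=900, correct=True),
        Sample("short correct trace ...", n_tokens=350, correct=True),
        Sample("wrong trace ...", n_tokens=600, correct=False),
    ]
    chosen = select_by_adaptive_depth(samples)
    print(chosen.n_tokens if chosen else "no correct sample")
```

The sketch only shows the control flow suggested by the abstract: retain correct self-generated traces whose depth matches problem difficulty, and accept a more concise rewrite only when it remains close to the model's own output distribution.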