🤖 AI Summary
This work addresses the high computational cost of Chain-of-Thought (CoT) reasoning in large language models, which existing compression methods struggle to reduce without compromising logical fidelity at aggressive compression ratios. To overcome this challenge, the authors propose Extra-CoT, a novel framework that employs a semantics-preserving compressor to generate high-fidelity supervision data, combined with mixed-ratio supervised fine-tuning and a new Constrained and Hierarchical Ratio Policy Optimization (CHRPO) algorithm. This approach enables highly efficient and accurate reasoning even at extreme compression levels. Evaluated on three mathematical reasoning benchmarks—including MATH-500—Extra-CoT significantly outperforms state-of-the-art methods, achieving over 73% token compression on Qwen3-1.7B while simultaneously improving accuracy by 0.6%.
📝 Abstract
Chain-of-Thought (CoT) reasoning successfully enhances the reasoning capabilities of Large Language Models (LLMs), yet it incurs substantial computational overhead at inference time. Existing CoT compression methods often suffer a critical loss of logical fidelity at high compression ratios, resulting in significant performance degradation. To achieve high-fidelity, fast reasoning, we propose a novel EXTreme-RAtio Chain-of-Thought Compression framework, termed Extra-CoT, which aggressively reduces the token budget while preserving answer accuracy. To generate reliable, high-fidelity supervision, we first train a dedicated semantics-preserving compressor on mathematical CoT data with fine-grained annotations. An LLM is then fine-tuned on these compressed pairs via mixed-ratio supervised fine-tuning (SFT), teaching it to follow a spectrum of compression budgets and providing a stable initialization for reinforcement learning (RL). We further propose Constrained and Hierarchical Ratio Policy Optimization (CHRPO), which explicitly incentivizes question-solving ability under lower budgets via a hierarchical reward. Experiments on three mathematical reasoning benchmarks show the superiority of Extra-CoT. For example, on MATH-500 using Qwen3-1.7B, Extra-CoT achieves over 73% token reduction with an accuracy improvement of 0.6%, significantly outperforming state-of-the-art (SOTA) methods.
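To make the idea of a hierarchical, budget-constrained reward concrete, here is a minimal sketch of what such a reward shape could look like. This is an illustrative toy, not the paper's actual CHRPO reward: the function name, the base/bonus values, and the linear bonus schedule are all assumptions; the only property taken from the abstract is that correctness is rewarded first, and solving the question under a tighter token budget earns extra credit.

```python
def hierarchical_reward(is_correct: bool, num_tokens: int, budget: int,
                        ratio_bonus: float = 0.5) -> float:
    """Toy hierarchical reward (illustrative, not the paper's formula).

    Correctness dominates: a wrong answer earns nothing, so the policy
    cannot trade accuracy for brevity. Among correct answers, using less
    of the token budget earns a bonus, incentivizing compression.
    """
    if not is_correct:
        return 0.0  # wrong answers get no credit regardless of length
    reward = 1.0  # base reward for a correct answer
    if num_tokens <= budget:
        # Bonus grows linearly as the response uses less of its budget.
        reward += ratio_bonus * (1.0 - num_tokens / budget)
    return reward


# Example: a correct answer at half the budget outscores one at the limit.
print(hierarchical_reward(True, 50, 100))   # 1.25
print(hierarchical_reward(True, 100, 100))  # 1.0
print(hierarchical_reward(False, 10, 100))  # 0.0
```

Under this two-level structure, correctness strictly dominates compression, which is one simple way to read the abstract's claim that CHRPO incentivizes question-solving "under lower budgets" rather than brevity for its own sake.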