CoThink: Token-Efficient Reasoning via Instruct Models Guiding Reasoning Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently exhibit “overthinking” during test-time scaling—employing unnecessarily deep, redundant reasoning even for simple tasks, leading to excessive token consumption and increased latency. Method: We propose a dual-model collaborative inference framework: an instruction model generates high-level solution outlines, while a dedicated reasoning model executes precise, efficient computation guided by those outlines. We formally define “reasoning efficiency” and uncover its underlying scaling law; by decoupling instruction generation from reasoning execution, our framework enables dynamic, task-adaptive depth control—replacing fixed, conservative depth policies. Contribution/Results: The framework is fully compatible with existing models (e.g., DAPO, DeepSeek-R1, QwQ) without requiring retraining or architectural modifications. On GSM8K, MATH500, and AIME24, it reduces total generated tokens by 22.3% on average, with pass@1 accuracy dropping by ≤0.42%.

📝 Abstract
Large language models (LLMs) benefit from increased test-time compute, a phenomenon known as test-time scaling. However, reasoning-optimized models often overthink even simple problems, producing excessively verbose outputs and leading to low token efficiency. By comparing these models with equally sized instruct models, we identify two key causes of this verbosity: (1) reinforcement learning reduces the information density of forward reasoning, and (2) backward chain-of-thought training encourages redundant and often unnecessary verification steps. Since LLMs cannot assess the difficulty of a given problem, they tend to apply the same cautious reasoning strategy across all tasks, resulting in inefficient overthinking. To address this, we propose CoThink, an embarrassingly simple pipeline: an instruct model first drafts a high-level solution outline; a reasoning model then works out the solution. We observe that CoThink enables dynamic adjustment of reasoning depth based on input difficulty. Evaluated with three reasoning models (DAPO, DeepSeek-R1, and QwQ) on three datasets (GSM8K, MATH500, and AIME24), CoThink reduces total token generation by 22.3% while maintaining pass@1 accuracy within a 0.42% margin on average. With reference to the instruct model, we formally define reasoning efficiency and observe a potential reasoning efficiency scaling law in LLMs.
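The abstract defines reasoning efficiency with reference to an instruct model but the exact formula is not reproduced on this page. One natural token-ratio reading can be sketched as follows; the function name and the precise form are illustrative assumptions, not the paper's definition:

```python
def reasoning_efficiency(instruct_tokens: float, reasoning_tokens: float,
                         instruct_acc: float, reasoning_acc: float) -> float:
    """Illustrative metric (NOT the paper's exact definition): accuracy per
    generated token of a reasoning model, normalized by the accuracy per
    token of an equally sized instruct model on the same benchmark.
    A value below 1.0 suggests the extra reasoning tokens are spent
    inefficiently (overthinking); token counts must be positive."""
    return (reasoning_acc / reasoning_tokens) / (instruct_acc / instruct_tokens)

# Example with made-up numbers: the reasoning model is more accurate but
# spends 4x the tokens, so its per-token efficiency is well below 1.
print(reasoning_efficiency(instruct_tokens=100, reasoning_tokens=400,
                           instruct_acc=0.8, reasoning_acc=0.9))
```

Under this reading, CoThink's 22.3% token reduction at near-constant pass@1 directly raises the metric by shrinking the denominator.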
Problem

Research questions and friction points this paper is trying to address.

Reduce token inefficiency in reasoning-optimized LLMs
Address overthinking by reasoning models on simple problems
Dynamic adjustment of reasoning depth by input difficulty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruct model drafts high-level solution outline
Reasoning model dynamically adjusts solution depth
Reduces total token generation by 22.3% while keeping pass@1 accuracy within a 0.42% margin
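The two-stage pipeline above can be sketched in a few lines. The model-calling functions below are stand-in stubs (the names and return strings are invented for illustration); a real deployment would replace them with calls to an actual instruct model and a reasoning model such as DAPO, DeepSeek-R1, or QwQ:

```python
# Minimal sketch of the CoThink two-stage pipeline, assuming stubbed model calls.

def call_instruct_model(problem: str) -> str:
    """Stub: an instruct model drafts a concise, high-level solution outline."""
    return f"Outline: identify the quantities in '{problem}', set up the equation, solve."

def call_reasoning_model(problem: str, outline: str) -> str:
    """Stub: a reasoning model works out the solution, guided by the outline."""
    return f"Following the outline ({outline}) -> detailed solution for: {problem}"

def cothink(problem: str) -> str:
    # Stage 1: a cheap outline from the instruct model bounds the solution scope.
    outline = call_instruct_model(problem)
    # Stage 2: the reasoning model solves the problem under that outline, which
    # (per the paper) curbs overthinking on easy inputs without retraining.
    return call_reasoning_model(problem, outline)

print(cothink("If 3 apples cost $6, what does 1 apple cost?"))
```

Because both stages use off-the-shelf models through plain prompting, the pipeline needs no retraining or architectural changes, which is what makes it compatible with existing reasoning models.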
Authors
Siqi Fan, University of Electronic Science and Technology of China
Peng Han, Professor, Department of Computer Science, UESTC (drug discovery, spatial-temporal, data mining)
Shuo Shang, Computer Science & AI Scientist (spatial data, spatiotemporal databases)
Yequan Wang, Beijing Academy of Artificial Intelligence, China
Aixin Sun, Nanyang Technological University, Singapore