Logit Arithmetic Elicits Long Reasoning Capabilities Without Training

📅 2025-07-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether the long chain-of-thought reasoning behaviors of large reasoning models (LRMs), such as backtracking and self-correction, can be elicited from a large base model without any additional training. The proposed method, ThinkLogit, is a decoding-time intervention that applies logit arithmetic: a compact guide model (R1-Distill-Qwen-1.5B, 21x smaller) steers the next-token logits of a large target model (Qwen2.5-32B) toward long reasoning. A stronger variant, ThinkLogit-DPO, further trains the guide with direct preference optimization (DPO) over correct/incorrect reasoning pairs sampled from both the target and guide models. On four mathematical reasoning benchmarks, ThinkLogit and ThinkLogit-DPO achieve 26% and 29% relative improvements in pass@1, respectively; ThinkLogit can also transfer reasoning skills acquired through reinforcement learning, yielding a 13% relative pass@1 gain over the Qwen2.5-32B base model.

📝 Abstract
Large reasoning models (LRMs) can do complex reasoning via long chain-of-thought (CoT) involving cognitive strategies such as backtracking and self-correction. Recent studies suggest that some models inherently possess these long reasoning abilities, which may be unlocked via extra training. Our work first investigates whether we can elicit such behavior without any training. To this end, we propose a decoding-time approach, ThinkLogit, which utilizes logit arithmetic (Liu et al., 2024) to tune a target large LM for long reasoning using a substantially smaller model as a guide. We then show that we can further boost performance by training the guide model with preference optimization over correct/incorrect reasoning pairs sampled from both the target and guide model -- a setup we refer to as ThinkLogit-DPO. Our experiments demonstrate that ThinkLogit and ThinkLogit-DPO achieve a relative improvement in pass@1 of 26% and 29%, respectively, over four mathematical datasets using Qwen2.5-32B when guided by R1-Distill-Qwen-1.5B -- a model 21x smaller. Lastly, we show that ThinkLogit can transfer long reasoning skills acquired through reinforcement learning, improving pass@1 by 13% relative compared to the Qwen2.5-32B base model. Our work presents a computationally efficient method to elicit long reasoning in large models with minimal or no additional training.
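The logit-arithmetic scheme the abstract refers to (Liu et al., 2024, proxy tuning) combines three models at each decoding step: the large target model plus the difference between a tuned guide and its untuned base. The sketch below is a minimal illustration of that combination rule on raw logit vectors; the function name, the `alpha` scaling knob, and the use of NumPy arrays are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def think_logit_step(target_logits: np.ndarray,
                     guide_logits: np.ndarray,
                     guide_base_logits: np.ndarray,
                     alpha: float = 1.0) -> np.ndarray:
    """Proxy-tuning-style logit arithmetic for one decoding step.

    target_logits:     next-token logits from the large target model
                       (e.g. Qwen2.5-32B).
    guide_logits:      logits from the small long-reasoning guide
                       (e.g. R1-Distill-Qwen-1.5B).
    guide_base_logits: logits from the guide's untuned base model of
                       the same size.
    alpha:             illustrative strength knob (alpha=1 recovers the
                       plain proxy-tuning sum).

    The guide's *delta* over its base encodes the long-reasoning
    behavior; adding it steers the large model without any training.
    """
    return target_logits + alpha * (guide_logits - guide_base_logits)

def greedy_token(combined_logits: np.ndarray) -> int:
    """Pick the next token from the combined distribution (greedy here)."""
    return int(np.argmax(combined_logits))
```

In a real decoding loop all three models would be run on the same growing prefix, and the combined logits would feed the usual sampling or greedy step; only vocabulary-aligned models can be combined this way.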
Problem

Research questions and friction points this paper is trying to address.

Elicit long reasoning in large models without training
Improve reasoning via small model-guided logit arithmetic
Transfer learned reasoning skills to boost performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logit arithmetic enables long reasoning without training
Smaller model guides large model via ThinkLogit
Preference optimization boosts performance further
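The preference-optimization step mentioned above trains the guide model with the standard DPO objective (Rafailov et al., 2023) on correct (chosen) versus incorrect (rejected) reasoning traces. The sketch below shows that per-pair loss given sequence log-probabilities under the policy and a frozen reference model; the function name and `beta` value are illustrative, and this is a simplified scalar form rather than the paper's training code.

```python
import math

def dpo_pair_loss(policy_logp_chosen: float,
                  policy_logp_rejected: float,
                  ref_logp_chosen: float,
                  ref_logp_rejected: float,
                  beta: float = 0.1) -> float:
    """DPO loss for one (correct, incorrect) reasoning pair.

    The margin compares how much more the policy (the guide being
    trained) prefers the correct trace over the incorrect one,
    relative to the frozen reference model; minimizing -log(sigmoid)
    of that margin pushes the guide toward correct reasoning.
    """
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

With no preference (zero margin) the loss is log 2, and it shrinks as the policy assigns relatively more probability to the correct trace.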