Base Models Know How to Reason, Thinking Models Learn When

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Why do reasoning-oriented language models (e.g., DeepSeek-R1) substantially outperform their base counterparts? Does their enhanced capability stem from newly acquired reasoning skills, or from more effective activation of reasoning mechanisms already present in the base model? Method: We propose an unsupervised, bottom-up framework for discovering reasoning behaviors, combined with token-level causal interventions on GSM8K and MATH500 that, without any weight updates, steer reasoning trajectories by modulating only the ~12% of tokens identified as critical. Contribution: We provide the first empirical evidence that strong latent reasoning capacity is already encoded in base models. The key innovation of reasoning models lies not in *learning reasoning from scratch*, but in *learning when and how to invoke* these inherent mechanisms. This "activation control" alone bridges up to 91% of the performance gap between base and reasoning models, suggesting a shift in perspective: pretraining encodes reasoning primitives, while post-training refines their strategic invocation.
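The token-level intervention can be pictured as modulating hidden states only at a small set of critical positions. The sketch below is a hypothetical illustration, not the paper's implementation: the function name, the per-token criticality scores, the steering `direction`, and the scale `alpha` are all assumptions.

```python
import numpy as np

def steer_critical_tokens(hidden, scores, direction, alpha=4.0, frac=0.12):
    """Add a steering direction to the hidden states of the most 'critical'
    tokens, leaving all other positions untouched.

    hidden    : (seq_len, d) array of token hidden states
    scores    : (seq_len,) hypothetical criticality score per token
    direction : (d,) vector pointing toward a reasoning behavior
    frac      : fraction of tokens to modulate (the paper steers ~12%)
    """
    seq_len = hidden.shape[0]
    k = max(1, int(round(frac * seq_len)))
    critical = np.argsort(scores)[-k:]       # top-k most critical positions
    steered = hidden.copy()
    steered[critical] += alpha * direction   # intervene only at those tokens
    return steered, critical
```

Note that the base model's weights are never updated; only a small fraction of activations are nudged at generation time, which is what makes the result evidence for pre-existing reasoning mechanisms.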

📝 Abstract
Why do thinking language models like DeepSeek R1 outperform their base counterparts? Despite consistent performance gains, it remains unclear to what extent thinking models learn entirely new reasoning capabilities or repurpose pre-existing base model ones. In this work, we propose a hybrid model where we activate reasoning mechanisms in base models at the right time to elicit thinking-model-level reasoning chains, implying that thinking models exploit already existing capabilities. To ground our analysis, we introduce an unsupervised, bottom-up approach for uncovering human-interpretable reasoning behaviors in thinking models. This approach provides an unbiased method to discover reasoning behaviors without imposing manual or LLM-derived assumptions. Across three base and four thinking models, using GSM8K and MATH500, our hybrid model recovers up to 91% of the performance gap to thinking models without any weight updates while steering only 12% of tokens. Concretely, our empirical setup provides a simple, causal way to test the effectiveness of existing reasoning mechanisms in base models by invoking them directly and measuring the resulting task performance. More broadly, these results reframe our understanding of how thinking models are trained: pre-training is when models acquire most of their reasoning mechanisms, and post-training teaches efficient deployment of these mechanisms at the right time, enabling efficient use of their inference-time compute.
Problem

Research questions and friction points this paper is trying to address.

Investigating whether thinking models learn new reasoning skills or repurpose existing base-model capabilities
Developing a hybrid model that activates base-model reasoning mechanisms at the right time
Introducing an unsupervised, bottom-up approach to uncover human-interpretable reasoning behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

A hybrid model that activates existing base-model reasoning mechanisms without any weight updates
An unsupervised, bottom-up approach that discovers reasoning behaviors without manual or LLM-derived assumptions
Evidence that post-training teaches efficient, well-timed deployment of mechanisms acquired in pre-training
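The headline 91% figure is the fraction of the base-to-thinking accuracy gap closed by the hybrid model. A minimal sketch of that arithmetic, with made-up accuracies chosen only to reproduce the reported ratio:

```python
def gap_recovered(base_acc, hybrid_acc, thinking_acc):
    """Fraction of the base-to-thinking performance gap closed by the hybrid model."""
    return (hybrid_acc - base_acc) / (thinking_acc - base_acc)

# Illustrative (made-up) accuracies: if the base model scores 40%, the thinking
# model 80%, and the hybrid 76.4%, the hybrid recovers 91% of the gap.
print(gap_recovered(0.40, 0.764, 0.80))
```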