Encode, Think, Decode: Scaling test-time reasoning with recursive latent thoughts

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can the reasoning capabilities of large language models (LLMs) be enhanced without adding parameters, expanding training data, or modifying the model architecture? This paper proposes the Encode-Think-Decode (ETD) framework, which enables lightweight, test-time-adaptive recursive latent reasoning within intermediate hidden layers, extending computational depth rather than model capacity. Its core idea is to reuse a small set of reasoning-critical intermediate layers under a fixed architecture, combining mid-training on those layers with iterative hidden-state refinement; this deepens reasoning at the cost of only a few extra passes through the reused layers. Evaluated on the OLMo-2 1B Base model, the method achieves substantial gains across 17 reasoning benchmarks, including +28.4% relative accuracy on GSM8K and +36% on MATH, demonstrating efficacy, scalability, and cross-task generalizability without any parameter growth.

📝 Abstract
Most efforts to improve the reasoning capabilities of large language models (LLMs) involve either scaling the number of parameters and the size of training data, or scaling inference computation by letting models generate complex chains of thought. Motivated by interpretability studies showing that the crucial computation required for reasoning tasks is concentrated in a limited range of layers, we introduce Encode-Think-Decode (ETD), a method that enhances the reasoning capabilities of a base model by training it to iterate over a small subset of reasoning-relevant layers during the mid-training stage. ETD amplifies latent reasoning while preserving the original architecture, parameter count, hyperparameters, and training data composition. When iterating on the selected layers at inference time, ETD models yield substantial gains on 17 reasoning benchmarks, including +28.4% relative accuracy improvement on GSM8K and +36% on MATH with the OLMo-2 1B Base model. We also explore an adaptive depth strategy that adjusts the computation per input token. Our results show that recursive latent reasoning offers a simple and effective path to stronger LLM reasoning.
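The encode-iterate-decode scheme described in the abstract can be sketched in a few lines. The toy residual layers, the layer-range indices, and the recursion count below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Stand-in residual blocks for transformer layers: h -> h + tanh(h @ W)
def make_layer():
    W = rng.normal(scale=0.1, size=(d, d))
    return lambda h, W=W: h + np.tanh(h @ W)

layers = [make_layer() for _ in range(6)]

def etd_forward(h, encode_end=2, think_end=4, recursions=3):
    """Encode once, iterate the middle 'think' layers, decode once."""
    for layer in layers[:encode_end]:            # Encode: early layers, one pass
        h = layer(h)
    for _ in range(recursions):                  # Think: reuse the mid layers
        for layer in layers[encode_end:think_end]:
            h = layer(h)
    for layer in layers[think_end:]:             # Decode: late layers, one pass
        h = layer(h)
    return h

h0 = rng.normal(size=(1, d))
out_r1 = etd_forward(h0, recursions=1)  # standard-depth pass
out_r3 = etd_forward(h0, recursions=3)  # deeper latent reasoning, same parameters
```

The point of the sketch: raising `recursions` increases effective depth without adding a single parameter, since the same mid-layer weights are reused on every iteration.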
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning capabilities without scaling model parameters or data
Iterating over reasoning-relevant layers to amplify latent reasoning
Improving performance on mathematical and logical reasoning benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains the model to iterate over reasoning-relevant layers during mid-training
Preserves the original model architecture and parameter count
Applies an adaptive depth strategy per input token
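The per-token adaptive depth idea can be illustrated with a simple halting rule: keep recursing on the "think" block until the hidden state stops changing. The contractive toy step, tolerance, and recursion cap below are assumptions for illustration; the paper's actual halting criterion is not described in this summary:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
W = rng.normal(scale=0.05, size=(d, d))  # small weights => contractive step

def think_step(h):
    return np.tanh(h @ W)  # toy, contractive stand-in for the 'think' block

def adaptive_think(h, tol=1e-3, max_recursions=16):
    """Recurse until the hidden state stabilises, up to a fixed cap."""
    for step in range(1, max_recursions + 1):
        h_next = think_step(h)
        if np.linalg.norm(h_next - h) < tol:  # token has "settled": stop early
            return h_next, step
        h = h_next
    return h, max_recursions

h_easy = np.zeros((1, d))         # already a fixed point: halts immediately
h_hard = rng.normal(size=(1, d))  # needs several recursions to settle
_, steps_easy = adaptive_think(h_easy)
_, steps_hard = adaptive_think(h_hard)
```

Under this rule, tokens whose representations converge quickly spend fewer recursions than tokens that keep changing, so compute is allocated per input token rather than uniformly.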