Thinking in Latents: Adaptive Anchor Refinement for Implicit Reasoning in LLMs

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of traditional chain-of-thought (CoT) reasoning and the rigidity of existing latent-space methods that rely on a fixed number of inference steps and therefore struggle to balance accuracy and computational efficiency. The authors propose AdaAnchor, a framework that introduces an adaptive stopping mechanism into latent-space reasoning for the first time. AdaAnchor embeds implicitly parameterized anchor vectors into the input and iteratively refines them during inference; the model determines the required number of reasoning steps from the convergence behavior of these anchors, automatically adapting to problem difficulty under a unified maximum-step constraint. Evaluated on three mathematical reasoning benchmarks, AdaAnchor achieves up to a 5% accuracy gain over fixed-step baselines, reduces latent-space computation steps by 48-60% on average, and cuts output token usage by 92-93% compared to standard CoT.
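A minimal sketch of the adaptive stopping idea, not the paper's exact formulation: the helper `refine_step`, the relative-change test, and the threshold `tau` below are illustrative assumptions.

```python
import torch

def refine_until_converged(anchors, refine_step, max_steps=16, tau=1e-3):
    """Refine latent anchors until they stop changing or the step budget runs out.

    `refine_step` stands in for one silent forward pass of the LLM that maps the
    current anchor states to updated ones; `tau` is an assumed convergence threshold.
    """
    steps_used = 0
    for _ in range(max_steps):
        new_anchors = refine_step(anchors)
        steps_used += 1
        # Relative change between successive anchor states as the convergence signal.
        delta = torch.norm(new_anchors - anchors) / (torch.norm(anchors) + 1e-8)
        anchors = new_anchors
        if delta < tau:
            break  # anchor dynamics converged: easier problems halt earlier
    return anchors, steps_used
```

Under a check of this kind, easy instances could halt after a handful of refinement steps while hard ones run up to the shared `max_steps` budget.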

📝 Abstract
Token-level Chain-of-Thought (CoT) prompting has become a standard way to elicit multi-step reasoning in large language models (LLMs), especially for mathematical word problems. However, generating long intermediate traces increases output length and inference cost, and can be inefficient when the model could arrive at the correct answer without extensive verbalization. This has motivated latent-space reasoning approaches that shift computation into hidden representations and only emit a final answer. Yet, many latent reasoning methods depend on a fixed number of latent refinement steps at inference, adding another hyperparameter that must be tuned across models and datasets to balance accuracy and efficiency. We introduce AdaAnchor, a latent reasoning framework that performs silent iterative computation by refining a set of latent anchor vectors attached to the input. AdaAnchor further incorporates an adaptive halting mechanism that monitors anchor stability across iterations and terminates refinement once the anchor dynamics converge, allocating fewer steps to easier instances while reserving additional refinement steps for harder ones under a shared maximum-step budget. Our empirical evaluation across three mathematical word-problem benchmarks shows that AdaAnchor with adaptive halting yields accuracy gains of up to 5% over fixed-step latent refinement while reducing average latent refinement steps by 48-60% under the same maximum-step budget. Compared to standard reasoning baselines, AdaAnchor achieves large reductions in generated tokens (92-93%) by moving computation into silent latent refinement, offering a different accuracy-efficiency trade-off with substantially lower output-token usage.
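As a rough end-to-end picture of the loop the abstract describes, the sketch below attaches the anchor vectors to the input embeddings, runs silent forward passes, feeds the hidden states at the anchor positions back in as the next anchor values, and halts once they stabilize. The interface (a `model` that returns last-layer hidden states), the feedback rule, and the convergence test are assumptions made for illustration, not AdaAnchor's exact design.

```python
import torch

@torch.no_grad()
def adaptive_anchor_reasoning(model, input_embeds, anchors, max_steps=16, tau=1e-3):
    """Silent latent refinement with adaptive halting.

    input_embeds: (seq_len, d) token embeddings of the question.
    anchors:      (k, d) latent anchor vectors attached to the input.
    `model(embeds)` is assumed to return last-layer hidden states of shape
    (seq_len + k, d); no intermediate tokens are generated during refinement.
    """
    k = anchors.size(0)
    steps_used = 0
    for _ in range(max_steps):
        embeds = torch.cat([input_embeds, anchors], dim=0)  # attach anchors to the input
        hidden = model(embeds)                              # one silent forward pass
        new_anchors = hidden[-k:]                           # read back refined anchor states
        steps_used += 1
        delta = torch.norm(new_anchors - anchors) / (torch.norm(anchors) + 1e-8)
        anchors = new_anchors
        if delta < tau:                                     # anchors stabilized: halt early
            break
    # Only the final answer is decoded afterwards, so output-token cost is limited to
    # the answer itself rather than a full chain-of-thought trace.
    return anchors, steps_used
```

Because no intermediate tokens are emitted, the generated output is just the final answer, which is where the reported 92-93% reduction in output tokens relative to standard CoT comes from.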
Problem

Research questions and friction points this paper is trying to address.

latent reasoning
adaptive halting
LLMs
reasoning efficiency
mathematical word problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent reasoning
adaptive halting
anchor refinement
silent computation
large language models
🔎 Similar Papers
2024-02-26 · Annual Meeting of the Association for Computational Linguistics · Citations: 97

Authors
Disha Sheshanarayana (Manipal University Jaipur)
Rajat Subhra Pal (TCS Research)
Manjira Sinha (TCS Research)
Tirthankar Dasgupta (Senior Scientist, TCS Research; Natural Language Processing, Computational Psycholinguistics, and Human Computer Interaction)