Implicit Reasoning in Large Language Models: A Comprehensive Survey

📅 2025-09-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing research lacks a mechanistic, system-level analysis of implicit reasoning in large language models (LLMs). This paper introduces the first taxonomy of implicit reasoning centered on *execution paradigms*, departing from conventional representation-based analyses and instead characterizing reasoning strategies computationally. We categorize implicit reasoning into three types: latent optimization, signal-guided control, and layer-wise recurrent execution. Leveraging intermediate-layer activation analysis, dynamic signal modulation, intra-layer recursive modeling, and behavioral interpretability experiments, we integrate structural, behavioral, and representational evidence to uncover how reasoning "silently unfolds" within internal representations. Furthermore, we establish a systematic evaluation framework covering mainstream benchmarks and low-latency inference metrics. All code and resources are publicly released and actively maintained.

๐Ÿ“ Abstract
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Reasoning with LLMs is central to solving multi-step problems and complex decision-making. To support efficient reasoning, recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning, where reasoning occurs silently via latent structures without emitting intermediate textual steps. Implicit reasoning brings advantages such as lower generation cost, faster inference, and better alignment with internal computation. Although prior surveys have discussed latent representations in the context of reasoning, a dedicated and mechanism-level examination of how reasoning unfolds internally within LLMs remains absent. This survey fills that gap by introducing a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies. We organize existing methods into three execution paradigms based on *how and where internal computation unfolds*: latent optimization, signal-guided control, and layer-recurrent execution. We also review structural, behavioral, and representation-based evidence that supports the presence of implicit reasoning in LLMs. We further provide a structured overview of the evaluation metrics and benchmarks used in existing works to assess the effectiveness and reliability of implicit reasoning. We maintain a continuously updated project at: https://github.com/digailab/awesome-llm-implicit-reasoning.
Problem

Research questions and friction points this paper is trying to address.

Examining how reasoning occurs internally within LLMs
Introducing taxonomy focused on computational execution paradigms
Reviewing evidence supporting implicit reasoning in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent optimization for silent reasoning
Signal-guided control of internal computation
Layer-recurrent execution paradigm implementation
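Of the three paradigms above, layer-recurrent execution is the easiest to make concrete: instead of stacking N distinct layers, one weight-tied block is applied repeatedly, so that depth acts as iterative latent computation. The sketch below is purely illustrative and assumes nothing from the survey itself; the names (`shared_block`, `recurrent_forward`) and the toy ReLU-style update are invented for this example.

```python
def shared_block(hidden, weight=0.5):
    """Stand-in for one transformer block: a simple ReLU-like update.

    In a real model this would be a full attention + MLP block with
    shared (tied) weights across recurrence steps.
    """
    return [max(0.0, weight * h) for h in hidden]

def recurrent_forward(hidden, n_steps=4):
    """Apply the SAME block n_steps times (weight-tied 'depth').

    More recurrence steps means more silent, internal computation
    without emitting any intermediate text tokens.
    """
    for _ in range(n_steps):
        hidden = shared_block(hidden)
    return hidden

# Toy hidden state; negative activations are clipped on the first pass.
out = recurrent_forward([1.0, -2.0, 3.0], n_steps=4)
print(out)  # [0.0625, 0.0, 0.1875]
```

The contrast with latent optimization and signal-guided control is where the extra computation comes from: here it is the recurrence depth itself, rather than optimizing a latent state or steering execution with control signals.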
Jindong Li
Hong Kong University of Science and Technology (Guangzhou)

Yali Fu
Jilin University
LLMs, Reasoning, Multimodal Learning, Graph Learning, Anomaly Detection

Li Fan
Hong Kong University of Science and Technology (Guangzhou)

Jiahong Liu
The Chinese University of Hong Kong

Yao Shu
Hong Kong University of Science and Technology (Guangzhou)

Chengwei Qin
HKUST(GZ), NTU
LLM, NLP

Menglin Yang
HKUST(GZ) | Yale University | CUHK
Hyperbolic Representation Learning, Transformer, Recommender System, LLM

Irwin King
The Chinese University of Hong Kong
Social Computing, Machine Learning, AI, Graph Neural Networks, NLP

Rex Ying
Yale University