🤖 AI Summary
Existing research lacks a mechanistic, system-level analysis of implicit reasoning in large language models (LLMs). This survey introduces the first taxonomy of implicit reasoning centered on *execution paradigms*, departing from conventional representation-based analyses and instead characterizing reasoning strategies by how they are computed. We categorize implicit reasoning into three paradigms: latent optimization, signal-guided control, and layer-recurrent execution. Drawing on intermediate-layer activation analysis, dynamic signal modulation, intra-layer recursive modeling, and behavioral interpretability experiments, we integrate structural, behavioral, and representational evidence to uncover how reasoning "silently unfolds" within internal representations. Furthermore, we establish a systematic evaluation framework covering mainstream benchmarks and low-latency inference metrics. All code and resources are publicly released and actively maintained.
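As a concrete illustration of the kind of intermediate-layer activation analysis mentioned above, the minimal sketch below reads out per-layer hidden states from an off-the-shelf causal LM. The model name (`gpt2`) and the arithmetic prompt are hypothetical stand-ins for exposition, not taken from any specific method covered by the survey.

```python
# Minimal sketch: inspecting intermediate-layer activations of a causal LM.
# "gpt2" and the prompt are illustrative placeholders; any HuggingFace model
# that exposes hidden states works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A query whose intermediate reasoning steps are never verbalized in the output.
prompt = "23 + 19 = "
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple (embeddings, layer_1, ..., layer_L),
# each of shape (batch, seq_len, hidden_dim). Tracking how the final-position
# representation evolves with depth is a common starting point for probing
# whether reasoning "unfolds" across layers.
for layer_idx, h in enumerate(out.hidden_states):
    last_tok = h[0, -1]
    print(f"layer {layer_idx:2d}  ||h|| = {last_tok.norm():.2f}")
```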
📝 Abstract
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Reasoning with LLMs is central to multi-step problem solving and complex decision-making. To support efficient reasoning, recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning, where reasoning occurs silently via latent structures without emitting intermediate textual steps. Implicit reasoning brings advantages such as lower generation cost, faster inference, and better alignment with internal computation. Although prior surveys have discussed latent representations in the context of reasoning, a dedicated, mechanism-level examination of how reasoning unfolds internally within LLMs remains absent. This survey fills that gap by introducing a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies. We organize existing methods into three execution paradigms based on ***how and where internal computation unfolds***: latent optimization, signal-guided control, and layer-recurrent execution. We also review structural, behavioral, and representation-based evidence that supports the presence of implicit reasoning in LLMs. We further provide a structured overview of the evaluation metrics and benchmarks used in existing works to assess the effectiveness and reliability of implicit reasoning. We maintain a continuously updated project at: https://github.com/digailab/awesome-llm-implicit-reasoning.
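To make the layer-recurrent execution paradigm concrete, here is a minimal, hypothetical sketch (not any specific surveyed method): a single weight-tied transformer block is applied for a variable number of steps, so additional internal computation can substitute for emitted chain-of-thought tokens. All dimensions and step counts are illustrative.

```python
# Hypothetical sketch of layer-recurrent execution: one weight-tied block
# iterated a variable number of times, deepening computation without adding
# parameters or emitting any intermediate text.
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor, n_steps: int = 4) -> torch.Tensor:
        # The same block (same weights) is applied n_steps times; the number
        # of iterations plays the role of adaptive "thinking" depth.
        for _ in range(n_steps):
            x = self.block(x)
        return x

x = torch.randn(1, 16, 256)  # (batch, seq_len, d_model), toy input
model = RecurrentBlock()
easy = model(x, n_steps=2)   # shallow recurrence for an easy input
hard = model(x, n_steps=8)   # deeper recurrence for a harder one
print(easy.shape, hard.shape)
```

Because the block is weight-tied, varying `n_steps` trades compute for (implicit) reasoning depth at constant parameter count, which is the core intuition behind this paradigm.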