🤖 AI Summary
This work investigates the mechanistic decoupling between stated confidence and actual problem-solving capability in large language models (LLMs), where high confidence does not reliably indicate correctness.

Method: We propose an “evaluation–execution dual-system” framework: the evaluation phase generates confidence signals on a high-dimensional, nonlinear manifold, while the execution phase follows a low-dimensional, constrained reasoning trajectory; this geometric separation impedes confidence-based control of outputs. We employ linear probes to decode “solvability belief,” principal component analysis to quantify effective manifold dimensionality, and causal interventions to test representational controllability (see the sketch below).

Results: Confidence is linearly decodable along a universal axis, yet the evaluation manifold exhibits markedly higher intrinsic dimensionality than the execution manifold, and linear steering along the belief axis fails to alter final outputs, revealing strong internal constraints on execution dynamics. This study is the first to attribute LLM overconfidence to representational geometry, challenging the prevailing assumption that decodability implies controllability.
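The probing and geometry analysis is straightforward to prototype. Below is a minimal, hedged sketch in Python: a logistic-regression probe whose weight vector serves as the candidate belief axis, and a participation-ratio estimate of effective linear dimensionality over PCA eigenvalues. All names (`H_eval`, `y_solved`, the probe setup) are illustrative assumptions rather than the paper's released code, and the participation ratio is one common choice for the dimensionality measure the summary describes.

```python
# Sketch of the probe + dimensionality analysis. Assumes H_eval is an
# (n_prompts, d_model) array of hidden states captured just before generation
# begins, and y_solved is a binary label for whether the model later solved
# each problem. Names and setup are illustrative, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_belief_probe(H_eval: np.ndarray, y_solved: np.ndarray):
    """Fit a linear probe; its weight vector is the candidate 'belief axis'."""
    H_tr, H_te, y_tr, y_te = train_test_split(
        H_eval, y_solved, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
    acc = probe.score(H_te, y_te)  # held-out decodability of solvability belief
    axis = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # unit belief axis
    return axis, acc

def participation_ratio(H: np.ndarray) -> float:
    """Effective linear dimensionality: (sum lam_i)^2 / sum lam_i^2 over PCA eigenvalues."""
    Hc = H - H.mean(axis=0)                           # center before PCA
    lam = np.linalg.svd(Hc, compute_uv=False) ** 2    # PCA eigenvalues (up to scale)
    return float(lam.sum() ** 2 / (lam ** 2).sum())   # scale-invariant ratio
```

Comparing `participation_ratio` on pre-generation states against the same statistic on states sampled along the reasoning trace would exhibit the dimensionality gap described above.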
📝 Abstract
Large language models (LLMs) often exhibit a puzzling disconnect between their asserted confidence and their actual problem-solving competence. We offer a mechanistic account of this decoupling by analyzing the geometry of internal states across two phases: pre-generative assessment and solution execution. A simple linear probe decodes a model's internal "solvability belief," revealing a well-ordered belief axis that generalizes across model families and across math, code, planning, and logic tasks. Yet the two geometries diverge: although belief is linearly decodable, the assessment manifold has high effective linear dimensionality, as measured by principal component analysis, while the subsequent reasoning trace evolves on a much lower-dimensional manifold. This sharp reduction in geometric complexity from thought to action mechanistically explains the confidence-competence gap. Causal interventions that steer representations along the belief axis leave final solutions unchanged, indicating that linear nudges in the complex assessment space do not control the constrained dynamics of execution. We thus uncover a two-system architecture: a geometrically complex assessor feeding a geometrically simple executor. These results challenge the assumption that decodable beliefs are actionable levers and argue instead for interventions that target the procedural dynamics of execution rather than the high-level geometry of assessment; the sketch below illustrates the steering test.
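To make the causal test concrete, here is a hedged sketch of one way to steer along the belief axis: add a scaled copy of the axis to one layer's hidden states during generation and compare outputs with the unsteered baseline. The hook mechanics and the `model.model.layers[layer_idx]` path follow PyTorch and Hugging Face conventions for LLaMA-style decoders; they are assumptions, not the paper's actual intervention code.

```python
# Sketch of a belief-axis steering intervention via a forward hook. The layer
# path and generation settings are assumptions (LLaMA-style HF decoder), not
# the paper's implementation. `axis` is the unit vector from the probe above.
import torch

def steer_and_generate(model, tokenizer, prompt: str, axis: torch.Tensor,
                       alpha: float, layer_idx: int) -> str:
    """Add alpha * axis to one decoder layer's hidden states while generating."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * axis.to(hidden.dtype).to(hidden.device)
        # Returning a value from a forward hook replaces the layer's output.
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    layer = model.model.layers[layer_idx]  # decoder block; path is architecture-dependent
    handle = layer.register_forward_hook(hook)
    try:
        ids = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=256, do_sample=False)
    finally:
        handle.remove()  # always detach the hook, even if generation fails
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

On the paper's account, outputs should remain unchanged for modest `alpha`: the belief axis is decodable, but nudging along it does not steer the constrained dynamics of execution.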