🤖 AI Summary
This work addresses the “spurious correctness” problem in large language models (LLMs): output predictions that are right for the wrong reasons during program execution simulation and programming reasoning. To this end, the authors propose the Code Execution Simulation (CES) evaluation task, a systematic benchmark that assesses logical coherence and cross-path reasoning consistency. Methodologically, CES combines three measures: variable-level prediction accuracy, logical coherence of the simulated execution, and a strong/weak/random quantification of reasoning consistency across tests with the same or different prime path coverage. Evaluating 16 state-of-the-art LLMs, the authors find that while 81.42% of execution simulations on HumanEval are coherent, frontier models such as GPT-4 and DeepSeek-R1 are among the most incoherent reasoners, and cross-test consistency is mostly random or weak, revealing LLMs’ weakness in path-sensitive analysis. Crucially, on tasks that intuitively require execution-aware reasoning, such as bug prediction, localization, and repair, models appear to rely on pattern matching, natural language shortcuts, or potential data leakage rather than genuine execution reasoning. This work thus establishes an interpretable paradigm for evaluating LLMs’ program reasoning abilities, providing both a principled assessment framework and a diagnostic benchmark for execution fidelity.
📝 Abstract
This paper proposes CES, a task that evaluates the abilities of LLMs to simulate program execution and to use that reasoning in programming tasks. Besides measuring the correctness of variable predictions during execution simulation, CES introduces the notion of coherence to determine whether the simulation complies with commonsense execution logic, even if the predicted values along the simulation are incorrect. This enables CES to rule out suspiciously correct output predictions that stem from reasoning shortcuts, hallucinations, or potential data leakage. CES also introduces a novel metric that measures reasoning consistency across tests with the same or different prime path coverage on a spectrum: strong, weak, and random. Evaluating 16 LLMs (including three reasoning LLMs) with CES yields 81.42% coherent execution simulations on HumanEval, of which 46.92% result in correct and 53.08% in incorrect output predictions. Frontier LLMs such as GPT-4 and DeepSeek-R1 exhibit the most incoherent execution reasoning, mostly due to natural language shortcuts. Despite relatively coherent execution simulation, LLMs' reasoning performance across different tests is inconsistent, being mostly random (48.87%) or weak (45.37%), which may explain their weakness in programming tasks that require path-sensitive program analysis to succeed. We also compare CES with bug prediction, localization, and repair, tasks that intuitively require control- and data-flow awareness. We observe that LLMs barely incorporate execution reasoning into their analysis of bug-related tasks; their success is primarily due to inherent abilities in pattern matching or natural language shortcuts, if not data leakage. Without such reasoning, the generalizability of LLMs to unseen bugs or patterns in different contexts is in question. CES can be used to systematically vet the suspicious success of LLMs on these tasks.
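For readers unfamiliar with the coverage vocabulary used by the consistency metric: a prime path is a maximal simple path in a program's control-flow graph (a path with no repeated node, except that it may start and end at the same node, and that is not a proper subpath of any other simple path). The sketch below is an illustrative implementation of this standard definition, not the paper's tooling; the CFG encoding as an adjacency dict is an assumption for the example.

```python
def simple_paths(cfg):
    """Enumerate all simple paths in a CFG given as {node: [successors]}.
    A path may repeat no node, except that first == last (a cycle) is allowed."""
    paths = [[n] for n in cfg]
    frontier = list(paths)
    while frontier:
        nxt = []
        for p in frontier:
            for s in cfg[p[-1]]:
                if s == p[0]:            # closing a cycle: still simple, but cannot grow
                    paths.append(p + [s])
                elif s not in p:         # extend without repeating a node
                    q = p + [s]
                    paths.append(q)
                    nxt.append(q)
        frontier = nxt
    return paths

def is_proper_subpath(p, q):
    """True if p occurs as a contiguous subsequence of q and p != q."""
    if p == q:
        return False
    n = len(p)
    return any(q[i:i + n] == p for i in range(len(q) - n + 1))

def prime_paths(cfg):
    """Prime paths: simple paths that are not a proper subpath of any other."""
    paths = simple_paths(cfg)
    return sorted(
        (tuple(p) for p in paths
         if not any(is_proper_subpath(p, q) for q in paths)),
        key=lambda p: (len(p), p))

# CFG of `while cond: body` — 0: entry, 1: loop header, 2: body, 3: exit
cfg = {0: [1], 1: [2, 3], 2: [1], 3: []}
print(prime_paths(cfg))
# → [(0, 1, 2), (0, 1, 3), (1, 2, 1), (2, 1, 2), (2, 1, 3)]
```

Two tests that traverse, say, (0, 1, 3) versus (0, 1, 2, 1, 3) exercise different prime paths, which is the sense in which CES compares reasoning consistency across tests with the same or different prime path coverage.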