A Tool for In-depth Analysis of Code Execution Reasoning of Large Language Models

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM code-reasoning evaluation tools operate solely at the input–output level and lack fine-grained attribution analysis of execution dynamics, which hinders deep understanding of model capabilities and targeted model optimization. Method: The authors introduce ExeRScope, a toolkit for systematic analysis of how code properties correlate with code-reasoning performance. It supports cross-benchmark diagnosis that generalizes beyond individual datasets via integrated techniques: dynamic program slicing, variable-state trajectory comparison, structured code-quality quantification (e.g., control-flow complexity, data-dependency depth), and heuristic attribution algorithms. Contribution/Results: Evaluated on benchmarks including CruxEval, ExeRScope empirically uncovers strong correlations between structural code properties and LLM error patterns. It improves analytical reproducibility and the transferability of conclusions across models and tasks, filling a gap in fine-grained, execution-aware code-reasoning diagnosis.

📝 Abstract
Code execution reasoning is becoming a new non-functional metric that assesses the ability of large language models (LLMs) in programming tasks. State-of-the-art frameworks (CodeMind or REval) and benchmarks (CruxEval) usually focus on an LLM's prediction of a given code's input/output or intermediate variable states/values on a limited set of programs. However, there is no tool for more in-depth analysis of the results. Without such a tool, observations about LLMs' code execution reasoning cannot be generalized to more datasets, preventing the research community and practitioners from devising the next generation of LLMs with better code execution reasoning abilities. This paper introduces ExeRScope, a series of tools and heuristics for analyzing the results of code execution reasoning frameworks to better understand how code properties in the studied benchmarks affect code execution reasoning. With such tooling, analysis can be generalized to code with similar properties without the urgent need to design more benchmarks, which is a cumbersome effort.
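The abstract refers to predicting "intermediate variable states/values" during execution. A minimal sketch of how such ground truth can be captured in Python (assuming `sys.settrace`-based instrumentation; this is not ExeRScope's actual implementation) records the local-variable state at each executed line:

```python
import sys

def trace_locals(func, *args):
    """Record (line_offset, locals) snapshots while func runs.
    These trajectories are the ground truth an execution-reasoning
    benchmark would compare LLM predictions against. Illustrative sketch."""
    trajectory = []
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            trajectory.append((frame.f_lineno - code.co_firstlineno,
                               dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return trajectory

def f(x):
    y = x * 2
    return y + 1

for offset, state in trace_locals(f, 3):
    print(offset, state)
```

Comparing a model's predicted variable values against such a trajectory, line by line, is one concrete way the input/output-level evaluation described above can be refined into execution-aware diagnosis.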
Problem

Research questions and friction points this paper is trying to address.

Language Model
Code Execution
Reasoning Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

ExeRScope
Code Reasoning
Large Language Models