Do Code Semantics Help? A Comprehensive Study on Execution Trace-Based Information for Code Large Language Models

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Code large language models (LLMs) struggle to accurately reason about program runtime behavior, primarily due to inconsistent and fragmented modeling of runtime semantics—especially execution traces—limiting their generalization and reasoning capabilities. This paper presents the first systematic empirical study of execution trace semantics in code LLMs, proposing a unified framework that flexibly integrates multi-granularity execution semantics across prompting, supervised fine-tuning, and inference stages. Experiments reveal that existing semantic injection methods yield only marginal improvements under fine-tuning and zero-shot or few-shot test-time scaling—challenging the widely held assumption that incorporating execution information inherently boosts performance. Our findings delineate critical boundary conditions for semantic enhancement and provide foundational empirical evidence and methodological guidance for interpretable modeling and efficient reasoning in code LLMs.

📝 Abstract
Code Large Language Models (Code LLMs) have opened a new era in programming with their impressive capabilities. However, recent research has revealed critical limitations in their ability to reason about runtime behavior and understand the actual functionality of programs, which poses significant challenges for their post-training and practical deployment. Specifically, Code LLMs face two principal issues: (1) a lack of proficiency in reasoning about program execution behavior, as they struggle to interpret what programs actually do at runtime, and (2) the inconsistent and fragmented representation of semantic information, such as execution traces, across existing methods, which hinders their ability to generalize and reason effectively. These challenges underscore the need for more systematic approaches to enhance the reasoning capabilities of Code LLMs. To address these issues, we introduce a generic framework that supports integrating semantic information (e.g., execution traces) into code task-relevant prompts, and conduct a comprehensive study of the role of semantic information in enhancing the reasoning ability of Code LLMs. Specifically, we investigate the usefulness of trace-based semantic information in boosting supervised fine-tuning (SFT) and post-phase inference of Code LLMs. The experimental results surprisingly disagree with previous work and demonstrate that semantic information has limited usefulness for SFT and test-time scaling of Code LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing Code LLMs' reasoning about program execution behavior
Addressing inconsistent representation of semantic information like execution traces
Determining whether trace-based semantic information actually improves Code LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating execution traces into code prompts
Studying trace-based semantics for model enhancement
Evaluating semantic information in fine-tuning and inference
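The framework's core idea is to serialize runtime semantics, such as line-level execution traces, into text that can be appended to a code-task prompt. As a rough illustration only (not the paper's implementation; the helper names here are hypothetical), a trace can be captured in Python with the standard `sys.settrace` hook and rendered as prompt text:

```python
import sys

def collect_trace(func, *args):
    """Run func(*args), recording (relative line number, locals) at each executed line."""
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function's frame.
        if event == "line" and frame.f_code is func.__code__:
            offset = frame.f_lineno - func.__code__.co_firstlineno
            trace.append((offset, dict(frame.f_locals)))  # copy locals snapshot
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always restore, even if func raises
    return result, trace

def sample(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, trace = collect_trace(sample, 3)
# Serialize the trace into text that could be appended to a code-task prompt.
trace_text = "\n".join(f"line {ln}: locals={lv}" for ln, lv in trace)
```

How such trace text is granularized and where it is injected (prompting, SFT data, or inference) is exactly the design space the paper's study varies.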
Jian Wang
Singapore Management University, Singapore
Xiaofei Xie
Singapore Management University
Software Engineering, Loop Analysis, Testing, Deep Learning
Qiang Hu
Tianjin University, China
Shangqing Liu
Nanjing University
Software Engineering, Deep Learning
Yi Li
Nanyang Technological University, Singapore