Can LLMs Compress (and Decompress)? Evaluating Code Understanding and Execution via Invertibility

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inconsistency between forward and backward execution in large language models (LLMs) on code tasks, which undermines reversible reasoning. To evaluate this systematically, the authors introduce RoundTripCodeEval (RTCE), the first benchmark designed to assess round-trip consistency in code LLMs via four categories of execution-free, exact-match bijective reasoning tasks. Using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms, together with a bijection fidelity metric, the study reveals fundamental gaps in the internal reasoning coherence of state-of-the-art Code-LLMs. Despite these enhancements, performance gains remain marginal, underscoring that round-trip consistency is a critical and unresolved challenge in code generation and reasoning.

📝 Abstract
LLMs demonstrate strong performance on code benchmarks, yet round-trip code execution reveals limitations in their ability to maintain consistent reasoning across forward and backward execution. We present RoundTripCodeEval (RTCE), a comprehensive benchmark consisting of four distinct code execution reasoning tasks designed to rigorously test round-trip consistency. RTCE provides an execution-free, exact-match evaluation of bijection fidelity, assessing whether models preserve a consistent one-to-one mapping between encoding and decoding operations across various algorithms and directions. We systematically evaluate state-of-the-art Code-LLMs using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms. Each yields modest improvements, but none closes the gap: current LLMs struggle with true round-trip consistency and lack the internal coherence required for trustworthy code reasoning. RTCE surfaces several new and previously unmeasured insights that are not captured by existing I/O-prediction, execution-reasoning, or round-trip natural-language benchmarks. We will release the code and the dataset upon acceptance.
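The abstract's notion of exact-match bijection fidelity can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `caesar_encode`/`caesar_decode` stand in for an invertible algorithm, and the two callables passed to `bijection_fidelity` would in RTCE's setting be a model's predicted forward and backward executions.

```python
def caesar_encode(text: str, shift: int = 3) -> str:
    """Toy invertible transform: shift lowercase letters by `shift`."""
    return "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
        for c in text
    )

def caesar_decode(text: str, shift: int = 3) -> str:
    """Inverse of caesar_encode."""
    return caesar_encode(text, -shift)

def bijection_fidelity(inputs, encode, decode) -> float:
    """Fraction of inputs where decode(encode(x)) == x, by exact string match."""
    ok = sum(1 for x in inputs if decode(encode(x)) == x)
    return ok / len(inputs)

inputs = ["hello world", "round trip", "llm"]
print(bijection_fidelity(inputs, caesar_encode, caesar_decode))  # 1.0
```

A score below 1.0 indicates the model's forward and backward mappings disagree somewhere, i.e., the round trip is not a true bijection. The evaluation is execution-free in the sense that only the model's predicted strings are compared; no code is actually run at test time.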
Problem

Research questions and friction points this paper is trying to address.

round-trip consistency
code understanding
code execution
invertibility
LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

round-trip consistency
code understanding
invertibility
execution-free evaluation
bijection fidelity
Nickil Maveli
School of Informatics, University of Edinburgh
Antonio Vergari
Reader (Associate Professor), University of Edinburgh, UK
Artificial Intelligence · Probabilistic Machine Learning · Probabilistic Circuits · Neuro-Symbolic AI
Shay B. Cohen
School of Informatics, University of Edinburgh