🤖 AI Summary
This work addresses a prevalent inconsistency between forward and backward execution in large language models (LLMs) on code-related tasks, an inconsistency that undermines the correctness of reversible reasoning. To evaluate the issue systematically, the authors introduce RoundTripCodeEval (RTCE), the first benchmark designed specifically to assess round-trip consistency in code LLMs through four categories of execution-free, exact-match bijective reasoning tasks. Using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms, together with a bijection fidelity metric, the study reveals fundamental deficiencies in the internal reasoning coherence of state-of-the-art Code-LLMs. Despite these enhancements, performance gains remain marginal, underscoring that round-trip consistency is a critical and unresolved challenge in code generation and reasoning.
📝 Abstract
LLMs demonstrate strong performance on code benchmarks, yet round-trip code execution reveals limitations in their ability to reason consistently across forward and backward execution. We present RoundTripCodeEval (RTCE), a comprehensive benchmark consisting of four distinct code execution reasoning tasks designed to rigorously test round-trip consistency. RTCE provides an execution-free, exact-match evaluation of bijection fidelity: whether a model preserves a consistent one-to-one mapping between encoding and decoding operations across algorithms and directions. We systematically evaluate state-of-the-art Code-LLMs using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms. Each yields modest improvements, but none closes the gap, indicating that current LLMs lack the internal coherence required for trustworthy round-trip code reasoning. RTCE surfaces several previously unmeasured insights that are not captured by existing I/O-prediction, execution-reasoning, or round-trip natural-language benchmarks. We will release the code and the dataset upon acceptance.
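To make the exact-match bijection-fidelity criterion concrete, the sketch below checks whether a pair of encode/decode functions forms a consistent round trip over a set of inputs. This is an illustrative reconstruction, not the RTCE harness: Base64 stands in as a toy bijection, and in the actual benchmark the encoded and decoded values would be model predictions scored without executing code. The function names `round_trip_fidelity`, `enc`, and `dec` are hypothetical.

```python
import base64


def round_trip_fidelity(encode, decode, samples):
    """Fraction of inputs for which decode(encode(x)) exactly matches x.

    A score of 1.0 means the pair behaves as a faithful bijection on
    the sample set; anything lower indicates a round-trip inconsistency.
    """
    return sum(decode(encode(x)) == x for x in samples) / len(samples)


# Toy stand-in bijection: Base64 encoding/decoding of strings.
def enc(s: str) -> str:
    return base64.b64encode(s.encode("utf-8")).decode("ascii")


def dec(s: str) -> str:
    return base64.b64decode(s.encode("ascii")).decode("utf-8")


samples = ["hello", "round trip", "RTCE"]
fidelity = round_trip_fidelity(enc, dec, samples)
print(fidelity)  # 1.0 for a true bijection; a lossy decoder would score lower
```

In RTCE's setting, `enc` and `dec` would instead be the model's forward prediction (input to output) and backward prediction (output to input), and the exact-match comparison is what makes the evaluation execution-free.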