🤖 AI Summary
Large language models (LLMs) face a fundamental semantic gap between their training on textual patterns and the execution-level correctness required for code generation. Existing execution-based reward methods, which rely solely on binary pass/fail signals, fail to capture subtle logical errors and align generated code only weakly with its intended semantics. To address this, CodeRL+ models variable-level execution trajectories, establishing fine-grained alignment between code text and its runtime state. This enables verifiable, dense semantic-consistency rewards constructed directly from on-policy rollouts, without external oracles, and compatible with diverse reinforcement learning algorithms. Experiments show consistent improvements: a 4.6% average relative gain in pass@1 across benchmarks; 15.5% and 4.4% higher accuracy on code-reasoning and test-output-generation tasks, respectively; and strong generalization across model architectures and RL algorithms.
📝 Abstract
While Large Language Models (LLMs) excel at code generation by learning from vast code corpora, a fundamental semantic gap remains between their training on textual patterns and the goal of functional correctness, which is governed by formal execution semantics. Reinforcement Learning with Verifiable Rewards (RLVR) approaches attempt to bridge this gap using outcome rewards from executing test cases. However, relying solely on binary pass/fail signals is inefficient for aligning the textual representation of code with its execution semantics, especially in the presence of subtle logical errors. In this paper, we propose CodeRL+, a novel approach that integrates execution-semantics alignment into the RLVR training pipeline for code generation. CodeRL+ trains the model to infer variable-level execution trajectories, providing a direct learning signal of execution semantics. It constructs this alignment directly from existing on-policy rollouts and integrates seamlessly with various RL algorithms. Extensive experiments demonstrate that CodeRL+ outperforms post-training baselines (including RLVR and distillation), achieving a 4.6% average relative improvement in pass@1. CodeRL+ also generalizes to other coding tasks, yielding 15.5% and 4.4% higher accuracy on code-reasoning and test-output-generation benchmarks, respectively, and it applies across diverse RL algorithms and LLMs. Furthermore, probe analyses provide compelling evidence that CodeRL+ strengthens the alignment between code's textual representation and its underlying execution semantics.
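To make the idea of a variable-level execution trajectory concrete, here is a minimal sketch of how such a trajectory could be extracted from a code rollout using Python's `sys.settrace`. The function names and trajectory format (`(line_number, locals_snapshot)` pairs) are illustrative assumptions, not the paper's actual implementation; the point is that each executed line yields a snapshot of variable states that a model's predicted trajectory could be compared against when computing a dense semantic-consistency reward.

```python
import sys

def trace_variable_trajectory(func, *args):
    """Illustrative sketch: record a variable-level execution trajectory
    by snapshotting local variables at every line event inside `func`.
    (Hypothetical helper; not CodeRL+'s actual trajectory format.)"""
    trajectory = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function's code object.
        if event == "line" and frame.f_code is func.__code__:
            trajectory.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, trajectory

def running_sum(xs):
    # Example "generated code" whose runtime states we want to capture.
    total = 0
    for x in xs:
        total += x
    return total

result, traj = trace_variable_trajectory(running_sum, [1, 2, 3])
# Each entry in `traj` pairs a line number with a snapshot of the locals
# at that point, e.g. the evolving value of `total`. A trajectory-level
# reward could score how well a model's predicted variable states match
# these ground-truth snapshots, rather than only the final pass/fail bit.
totals = [snap["total"] for _, snap in traj if "total" in snap]
```

Because such snapshots come from executing the policy's own rollouts, no external oracle is needed: the ground-truth variable states are produced by the same code the model generated.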