Generating Verifiable CoT from Execution Traces

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing synthetic chain-of-thought (CoT) data often relies on teacher models to generate “plausible-sounding” yet unverifiable reasoning steps, leading language models to internalize logical hallucinations. To address this, we propose Execution-Traced CoT: a method that instruments code execution to capture ground-truth program traces and structurally maps them to natural-language reasoning steps—each strictly verifiable via observable program behavior. This enables bidirectional verifiability: forward (execution → reasoning) and backward (reasoning → execution). Using this approach, we construct high-fidelity training data and perform supervised fine-tuning of language models. On code reasoning benchmarks, our method improves prediction accuracy by up to 30 percentage points (output) and 28 percentage points (input), while substantially enhancing logical consistency and trustworthiness in both code generation and explanation.
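The summary describes instrumenting code execution to capture ground-truth traces. The paper's actual instrumentation is not reproduced here; as an illustrative sketch, Python's built-in `sys.settrace` hook can record the line number and local-variable state at every executed line, which is the kind of ground truth the method grounds its reasoning steps in (the `double_positives` example function is hypothetical):

```python
import sys

def capture_trace(func, *args):
    """Record (line number, locals) at each executed line of func.

    Illustrative sketch only, not the paper's pipeline: uses Python's
    sys.settrace hook to log ground-truth program state.
    """
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function's frame.
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer  # keep tracing nested line events

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, trace

def double_positives(xs):
    out = []
    for x in xs:
        if x > 0:
            out.append(2 * x)
    return out

result, trace = capture_trace(double_positives, [1, -2, 3])
# result is [2, 6]; trace holds each executed line with its local state
```

Each `(lineno, locals)` pair is an observable fact about the computation, so any reasoning step mapped to it can be checked mechanically rather than taken on faith.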

📝 Abstract
Teaching language models to reason about code execution remains a fundamental challenge. While Chain-of-Thought (CoT) prompting has shown promise, current synthetic training data suffers from a critical weakness: the reasoning steps are often plausible-sounding explanations generated by teacher models, not verifiable accounts of what the code actually does. This creates a troubling failure mode where models learn to mimic superficially convincing but logically flawed reasoning patterns. We address this by grounding CoT generation directly in program execution traces. Our pipeline instruments code to capture its dynamic behavior, then narrates these verified execution traces into natural language rationales that are correct by construction. This execution-grounded approach ensures every reasoning step reflects what the program genuinely computes, eliminating logical hallucinations at the source. We evaluate our method on code reasoning tasks (forward reasoning on CruxEval and LiveCodeBench-Exec, backward reasoning on CruxEval-Input), as well as code generation and explanation tasks from HumanEval. Models trained on our bi-directional trace-grounded data achieve substantial improvements, with gains of up to 30 points on output prediction and 28 points on input prediction over base models, alongside improved explanation and code generation, demonstrating that verifiable reasoning fundamentally enhances model capabilities. https://github.ibm.com/IBM-Research-AI/Verified-Code-CoT
Problem

Research questions and friction points this paper is trying to address.

Synthetic CoT data from teacher models is plausible-sounding but unverifiable
Models trained on it internalize superficially convincing yet logically flawed reasoning
Reasoning about what code actually computes remains a core weakness of language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates verifiable reasoning from execution traces
Narrates program behavior into natural language rationales
Eliminates logical hallucinations by grounding in actual computation
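The second innovation bullet, narrating program behavior into natural-language rationales, can be sketched with a simple templated narrator. The paper's trace-to-language mapping is not shown here; this hypothetical `narrate_trace` assumes a trace in the `(lineno, locals)` form captured above and renders one checkable reasoning step per executed line:

```python
def narrate_trace(trace, source_lines):
    """Turn a (lineno, locals) trace into templated reasoning steps.

    Hypothetical narration scheme for illustration; because every step
    quotes the executed line and the recorded state, each one is
    verifiable against the trace (forward) and the trace can be
    reconstructed from the steps (backward).
    """
    steps = []
    for lineno, locs in trace:
        code = source_lines[lineno - 1].strip()
        state = ", ".join(f"{k} = {v!r}" for k, v in locs.items())
        steps.append(f"Executing `{code}`; state: {state}.")
    return steps

# Toy hand-written trace and source for demonstration.
trace = [(2, {"x": 1}), (3, {"x": 1, "y": 2})]
source = ["def f(x):", "    y = 2 * x", "    return x + y"]
for step in narrate_trace(trace, source):
    print(step)
```

A production pipeline would presumably use a language model to smooth these templated steps into fluent prose, but the key property survives: every claim in the rationale is anchored to a recorded program state.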