🤖 AI Summary
This work reframes code verification as a mechanistic interpretability task, addressing the limitations of existing approaches that rely on external mechanisms—such as unit tests or auxiliary language models—which are costly and constrained by the evaluator’s capabilities. The study demonstrates for the first time that large language models internally encode decodable signals of code correctness within their neural dynamics. By constructing line-level attribution graphs from residual stream activations during code generation, the authors extract algorithmic traces that reveal structural signatures distinguishing logically correct from incorrect implementations. They propose an introspective verification paradigm grounded in these attribution graphs, enabling causal interventions to rectify logical errors. The approach exhibits cross-lingual robustness across Python, C++, and Java, with topological features of the attribution graphs outperforming surface-level heuristics in correctness prediction and facilitating targeted repairs.
📝 Abstract
Current paradigms for code verification rely heavily on external mechanisms, such as execution-based unit tests or auxiliary LLM judges, which are often labor-intensive or limited by the judging model's own capabilities. This raises a fundamental, yet unexplored question: Can the functional correctness of LLM-generated code be assessed purely from the model's internal computational structure? Our primary objective is to investigate whether the model's neural dynamics encode internally decodable signals that are predictive of logical validity during code generation. Inspired by mechanistic interpretability, we propose to treat code verification as a mechanistic diagnostic task, mapping the model's explicit algorithmic trajectory into line-level attribution graphs. By decomposing complex residual flows, we aim to identify the structural signatures that distinguish sound reasoning from logical failure within the model's internal circuits. Analysis across Python, C++, and Java confirms that intrinsic correctness signals are robust across diverse syntaxes. Topological features from these internal graphs predict correctness more reliably than surface heuristics and enable targeted causal interventions to fix erroneous logic. These findings establish internal introspection as a decodable property for verifying generated code. Our code is at https://github.com/bruno686/CodeCircuit.
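To make the idea of "topological features of a line-level attribution graph" concrete, here is a minimal, hypothetical sketch: it assumes an attribution graph has already been extracted (nodes are generated code lines, directed edges are attribution links), and computes a few toy graph statistics of the kind that could feed a correctness predictor. The feature set and function names are illustrative assumptions, not the paper's actual pipeline.

```python
from collections import defaultdict

def attribution_graph_features(n_lines, edges):
    """Toy topological features of a line-level attribution graph.
    Nodes are generated code lines; directed edges (src -> dst) are
    attribution links assumed to point from earlier to later lines."""
    out = defaultdict(list)
    indeg = {i: 0 for i in range(n_lines)}
    for src, dst in edges:
        out[src].append(dst)
        indeg[dst] += 1
    # Density: edge count relative to the maximum possible in a simple digraph.
    density = len(edges) / (n_lines * (n_lines - 1))
    # Isolated lines: no incoming and no outgoing attribution at all.
    isolated = sum(1 for i in range(n_lines) if indeg[i] == 0 and not out[i])
    # Longest attribution chain (in lines), via DFS over the acyclic graph.
    def depth(node):
        return 1 + max((depth(v) for v in out[node]), default=0)
    longest_chain = max(depth(i) for i in range(n_lines))
    return {"density": density, "isolated": isolated, "longest_chain": longest_chain}

# A coherent implementation might form one long attribution chain...
coherent = attribution_graph_features(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
# ...while a logically broken one may leave lines disconnected.
broken = attribution_graph_features(5, [(0, 1), (2, 3)])
```

Features like these are the kind of structural signatures the abstract contrasts with surface-level heuristics: they depend only on the shape of the model's internal attribution flow, not on the code's text.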