🤖 AI Summary
Large language models (LLMs) exhibit a decoupling between their output token probabilities and the actual correctness of the code they generate, which limits reliability in AI-assisted programming.
Method: We investigate whether internal hidden states encode an implicit representation of code correctness, and propose a contrastive hidden-state analysis framework that probes for this signal by comparing paired correct and incorrect solutions across four state-of-the-art LLMs.
Contribution/Results: We extract a robust, execution-free correctness signal from layer-wise hidden states that reliably discriminates correct from incorrect code. The signal requires neither program execution nor human annotation, and it substantially outperforms conventional log-probability scores and verbalized confidence estimates, yielding an average accuracy improvement of 12.7% on code ranking and sample-filtering tasks. These findings offer a practical path toward high-fidelity, execution-free code selection and more trustworthy code generation.
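The summary does not spell out the extraction procedure, but a common contrastive-probing recipe matching this description is a difference-of-means direction: average the hidden-state difference over correct/incorrect pairs, then score new samples by projection. A minimal sketch, assuming hidden states at a fixed layer and token position have already been collected; all names and shapes are illustrative, not the paper's exact method:

```python
import torch

def correctness_direction(h_correct: torch.Tensor,
                          h_incorrect: torch.Tensor) -> torch.Tensor:
    """h_correct, h_incorrect: (n_pairs, hidden_dim) hidden states taken at a
    fixed layer and token position for paired solutions to the same task."""
    direction = (h_correct - h_incorrect).mean(dim=0)  # contrastive mean difference
    return direction / direction.norm()                # unit-normalize

def correctness_score(h_sample: torch.Tensor, direction: torch.Tensor) -> float:
    """Projection onto the correctness direction; higher = 'more correct'
    under this probe."""
    return float(h_sample @ direction)
```

Under this framing, scoring a candidate reduces to a single dot product per sample, which is what makes the signal execution-free.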
📝 Abstract
Despite the effectiveness of large language models (LLMs) for code generation, they often output incorrect code. One reason is that model output probabilities are often poorly correlated with correctness and reflect only the final output of the generation process. Inspired by findings that LLMs internally encode concepts such as truthfulness, this paper explores whether LLMs similarly represent code correctness. Specifically, we identify a correctness representation inside LLMs by contrasting the hidden states of paired correct and incorrect code for the same programming tasks. In experiments on four LLMs, we show that exploiting this extracted correctness representation outperforms both standard log-likelihood ranking and verbalized model confidence. Furthermore, we explore how this internal correctness signal can be used to select higher-quality code samples without requiring test execution. Ultimately, this work demonstrates how leveraging internal representations can enhance code generation systems and make LLMs more reliable, improving confidence in automatically generated code.
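As a usage illustration of the sample-selection idea, here is a hedged sketch of execution-free best-of-k selection: pull a layer-wise hidden state for each candidate with Hugging Face transformers and rank candidates by projection onto a precomputed correctness direction (as in the sketch above). The model name, layer index, and last-token pooling are assumptions for illustration, not details confirmed by the paper:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/deepseek-coder-6.7b-base"  # placeholder model choice
LAYER = 20  # probe layer; the index here is arbitrary, not from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def hidden_state(code: str) -> torch.Tensor:
    """Last-token hidden state at the chosen layer for one code sample."""
    inputs = tokenizer(code, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

def select_best(candidates: list[str], direction: torch.Tensor) -> str:
    """Rank candidates by projection onto the correctness direction and
    return the top-scoring sample, with no test execution involved."""
    scores = [float(hidden_state(c) @ direction) for c in candidates]
    return candidates[max(range(len(scores)), key=scores.__getitem__)]
```

The same scoring function can also serve as a filter, keeping only candidates whose projection exceeds a threshold calibrated on held-out pairs.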