Estimating Correctness Without Oracles in LLM-Based Code Generation

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating the correctness of code generated by large language models (LLMs) is challenging when no ground-truth oracle (i.e., a reference implementation or definitive correctness criterion) is available. Method: This paper proposes an oracle-free evaluation framework based on *incoherence*, a measure quantifying logical contradictions between natural language specifications and generated code. The incoherence score is efficiently computable without code execution or access to correct implementations, and it provides a sound lower bound on the probability that a generated program is incorrect. Contribution/Results: The method flags roughly 67% of erroneous programs with zero false positives. Empirical evaluation shows strong rank correlation (average Spearman ρ > 0.85) between incoherence-based rankings and oracle-based correctness assessments across diverse LLM outputs. By eliminating the dependence on oracles, the approach substantially improves evaluation efficiency and scalability, establishing a robust, unsupervised paradigm for code quality assessment.
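To make the lower-bound claim concrete, here is one plausible derivation, offered only as an illustration under assumptions of our own (it is not taken from the paper): suppose incoherence is the probability that two independently sampled programs for the same specification are semantically inequivalent, that ε is the probability that a single sampled program is incorrect, and that any two correct programs for the same specification agree.

```latex
% Illustrative derivation under the assumptions stated above (not the paper's exact argument).
% Two correct programs agree, so agreement is at least as likely as sampling two correct programs:
\[
  \Pr[\text{agree}] \;\ge\; (1-\epsilon)^2
\]
% Hence the disagreement probability (incoherence) is bounded from above,
% which rearranges into a lower bound on the per-program error probability:
\[
  \mathrm{incoherence} \;=\; \Pr[\text{disagree}] \;\le\; 1-(1-\epsilon)^2
  \qquad\Longrightarrow\qquad
  \epsilon \;\ge\; 1 - \sqrt{1-\mathrm{incoherence}}.
\]
```

Under this reading, any observed disagreement certifies that at least one sampled program is wrong without ever consulting a reference implementation, which is consistent with the zero-false-positive behavior reported above.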

📝 Abstract
Generating code from natural language specifications is one of the most successful applications of Large Language Models (LLMs). Yet, LLMs hallucinate: they produce outputs that may be grammatically correct but are factually incorrect. Without an existing, correct implementation (i.e., an oracle), can we quantify how likely the generated program is to be correct? In this paper, we propose a measure of incorrectness, called incoherence, that can be estimated efficiently in the absence of an oracle and provides a lower bound on the error, i.e., the probability that the LLM-generated program for that specification is incorrect. Our experiments demonstrate extraordinary effectiveness: for the average code generation task, our incoherence-based methodology automatically identifies about two-thirds of incorrect programs without reporting any false positives. In fact, an oracle-based evaluation of LLMs can be reliably replaced by an incoherence-based evaluation. In particular, we find very strong agreement between the ranking of LLMs by the number of programs deemed correct via an oracle (pass@1) and the ranking of LLMs by the number of programs deemed correct via our incoherence measure.
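As a rough sketch of how such an oracle-free estimate could be computed in practice (the function names, the sampling loop, and the equivalence check below are placeholders of our own, not the authors' implementation), one can sample several candidate programs for the same specification, measure how often pairs of them disagree, and convert that disagreement rate into a lower bound on the error:

```python
from itertools import combinations
from math import sqrt

def estimate_error_lower_bound(candidates, are_equivalent):
    """Oracle-free lower bound on the probability that a generated program is incorrect.

    `candidates`: programs independently sampled from an LLM for the same specification.
    `are_equivalent`: caller-supplied predicate for semantic equivalence of two programs
    (a placeholder; e.g., differential testing or a symbolic equivalence check).
    Two correct programs must agree, so any observed disagreement proves that at least
    one candidate is wrong -- no reference implementation is needed.
    """
    pairs = list(combinations(candidates, 2))
    if not pairs:
        return 0.0
    disagreeing = sum(1 for p, q in pairs if not are_equivalent(p, q))
    incoherence = disagreeing / len(pairs)        # estimated Pr[two samples disagree]
    return 1.0 - sqrt(1.0 - incoherence)          # lower bound on per-program error


# Hypothetical usage (`generate` stands in for an LLM call and is not defined here):
#   candidates = [generate(spec) for _ in range(10)]
#   bound = estimate_error_lower_bound(candidates, my_equivalence_check)
```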
Problem

Research questions and friction points this paper is trying to address.

Estimating correctness of LLM-generated code without oracles
Quantifying likelihood of factual errors in generated programs
Automatically identifying incorrect programs without false positives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Estimates program correctness without oracles
Uses incoherence as a measure of incorrectness
Identifies roughly two-thirds of incorrect programs with no false positives
Thomas Valentin
ENS Paris-Saclay, 4 av des Sciences, 91190 Gif-sur-Yvette, France
Ardi Madadi
Max Planck Institute for Security and Privacy, Universitätsstraße 140, 44799 Bochum
Gaetano Sapia
Max Planck Institute for Security and Privacy, Universitätsstraße 140, 44799 Bochum
Marcel Böhme
Max Planck Institute for Security and Privacy
Software Testing · Software Security · Fuzzing