Model See, Model Do? Exposure-Aware Evaluation of Bug-vs-Fix Preference in Code LLMs

📅 2026-01-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large code language models are prone to exposure bias in their training data, leading them to favor generating buggy code over correct fixes and thereby undermining reliability. This work proposes the first exposure-aware evaluation framework, which leverages Data Portraits for membership inference and explicitly links a model’s preference for bugs versus fixes to whether it was exposed to corresponding samples during training, using the ManySStuBs4J benchmark and Stack-V2 corpus. Experiments reveal that 67% of samples were unexposed; models consistently prefer reproducing bugs, and exposure to buggy code significantly amplifies this tendency. Different likelihood metrics exhibit divergent behaviors—minimum and maximum token probabilities stably favor fixes, whereas the Gini coefficient reverses preference when only bugs are observed. This study uncovers risks of memorization-driven error propagation and introduces a novel paradigm of stratified evaluation based on exposure status.

📝 Abstract
Large language models are increasingly used for code generation and debugging, but their outputs can still contain bugs that originate from training data. Whether an LLM prefers correct code or a familiar incorrect version may be influenced by what it was exposed to during training. We introduce an exposure-aware evaluation framework that quantifies how prior exposure to buggy versus fixed code influences a model's preference. Using the ManySStuBs4J benchmark, we apply Data Portraits for membership testing on the Stack-V2 corpus to estimate whether each buggy and fixed variant was seen during training. We then stratify examples by exposure and compare model preference using code completion as well as multiple likelihood-based scoring metrics. We find that most examples (67%) have neither variant in the training data, and when only one is present, fixes are more frequently present than bugs. In model generations, models reproduce buggy lines far more often than fixes, with bug-exposed examples amplifying this tendency and fix-exposed examples showing only marginal improvement. In likelihood scoring, minimum and maximum token-probability metrics consistently prefer the fixed code across all conditions, indicating a stable bias toward correct fixes. In contrast, metrics like the Gini coefficient reverse preference when only the buggy variant was seen. Our results indicate that exposure can skew bug-fix evaluations and highlight the risk that LLMs may propagate memorised errors in practice.
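The likelihood-based scoring idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the per-token probabilities below are made-up numbers standing in for what a model would assign to a buggy line and its fixed counterpart, and the exact metric definitions (e.g. the Gini formula over token probabilities) are assumptions.

```python
import math

def gini(probs):
    """Gini coefficient over a sequence's token probabilities.

    Measures how unevenly probability mass is spread across tokens
    (0 = perfectly uniform, approaching 1 = highly concentrated).
    """
    xs = sorted(probs)
    n = len(xs)
    total = sum(xs)
    # Standard closed form: G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def sequence_scores(token_probs):
    """Several likelihood-based preference metrics for one code line."""
    return {
        "mean_logprob": sum(math.log(p) for p in token_probs) / len(token_probs),
        "min_prob": min(token_probs),   # probability of the weakest token
        "max_prob": max(token_probs),   # probability of the strongest token
        "gini": gini(token_probs),
    }

# Hypothetical per-token probabilities for a buggy line and its fix
# (illustrative values only; a real setup would read these from the model).
buggy = [0.9, 0.8, 0.2, 0.85]
fixed = [0.9, 0.8, 0.6, 0.85]

bug_scores = sequence_scores(buggy)
fix_scores = sequence_scores(fixed)

# A metric "prefers" the fix when it scores the fixed variant higher.
prefers_fix_by_min = fix_scores["min_prob"] > bug_scores["min_prob"]
```

In this toy setup the minimum-probability metric favors the fix (its weakest token is more probable), while the buggy line has the higher Gini coefficient (its probability mass is more unequal), showing how different metrics can rank the same bug/fix pair differently.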
Problem

Research questions and friction points this paper is trying to address.

code LLMs
bug-fix preference
training data exposure
model evaluation
memorization bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

exposure-aware evaluation
code LLMs
bug-fix preference
Data Portraits
membership inference
Ali Al-Kaswan
PhD Candidate, Delft University of Technology
Software Engineering · NLP · Machine Learning · Cyber Security
Claudio Spiess
University of California at Davis
P. Devanbu
University of California at Davis
A. Deursen
Delft University of Technology
M. Izadi
Delft University of Technology