🤖 AI Summary
This study addresses the susceptibility of large language models (LLMs) to confirmation bias in secure code review, which can lead to missed vulnerabilities and enable software supply chain attacks. Through controlled experiments and adversarial pull request simulations, the work provides the first quantitative assessment of confirmation bias effects in LLM-assisted code review, revealing asymmetries across vulnerability types and prompting frameworks, and demonstrating real-world attack feasibility. The research employs multi-model comparisons, adversarial prompt construction, metadata manipulation, and debiasing strategies such as input sanitization and explicit instructions. Results show that confirmation bias reduces vulnerability detection rates by 16%–93%, with adversarial attacks achieving success rates of 35% on GitHub Copilot and 88% on Claude Code. The proposed debiasing interventions significantly restore detection performance.
📝 Abstract
Security code reviews increasingly rely on systems integrating Large Language Models (LLMs), ranging from interactive assistants to autonomous agents in CI/CD pipelines. We study whether confirmation bias (i.e., the tendency to favor interpretations that align with prior expectations) affects LLM-based vulnerability detection, and whether this failure mode can be exploited in software supply-chain attacks. We conduct two complementary studies.
Study 1 quantifies confirmation bias through controlled experiments on 250 CVE vulnerability/patch pairs evaluated across four state-of-the-art models under five framing conditions for the review prompt. Framing a change as bug-free reduces vulnerability detection rates by 16-93%, with strongly asymmetric effects: false negatives increase sharply while false positive rates change little. Bias effects vary by vulnerability type, with injection flaws being more susceptible to them than memory corruption bugs.
Study 2 evaluates exploitability in practice mimicking adversarial pull requests that reintroduce known vulnerabilities while framed as security improvements or urgent functionality fixes via their pull request metadata. Adversarial framing succeeds in 35% of cases against GitHub Copilot (interactive assistant) under one-shot attacks and in 88% of cases against Claude Code (autonomous agent) in real project configurations where adversaries can iteratively refine their framing to increase attack success. Debiasing via metadata redaction and explicit instructions restores detection in all interactive cases and 94% of autonomous cases. Our results show that confirmation bias poses a weakness in LLM-based code review, with implications on how AI-assisted development tools are deployed.