Exploring the Security Threats of Retriever Backdoors in Retrieval-Augmented Code Generation

📅 2025-12-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work is the first to identify and systematically characterize backdoor attacks targeting the retriever component in Retrieval-Augmented Code Generation (RACG), addressing a critical gap in supply-chain security research. To overcome the low stealthiness and high detectability of existing attacks, we propose VenomRACG—the first retriever-level backdoor framework—integrating knowledge base poisoning, similarity manipulation, and implicit trigger design. Leveraging latent-space perturbations and token-level covert embedding, VenomRACG ensures poisoned samples are statistically indistinguishable from benign code. With a poisoning rate of only 0.05% of the knowledge base, vulnerable code is retrieved within the top-5 results for 51.29% of queries, causing GPT-4o to generate vulnerable code in over 40% of targeted scenarios—while preserving model utility across standard benchmarks. Crucially, VenomRACG evades multiple state-of-the-art defenses, demonstrating unprecedented stealth and efficacy in RACG supply-chain compromise.

📝 Abstract
Retrieval-Augmented Code Generation (RACG) is increasingly adopted to enhance Large Language Models for software development, yet its security implications remain dangerously underexplored. This paper conducts the first systematic exploration of a critical and stealthy threat: backdoor attacks targeting the retriever component, which represent a significant supply-chain vulnerability. Assessing this threat realistically has been infeasible, as existing attack methods are either too ineffective to pose a real danger or are easily detected by state-of-the-art defense mechanisms spanning both latent-space analysis and token-level inspection, which achieve consistently high detection rates. To overcome this barrier and enable a realistic analysis, we first developed VenomRACG, a new class of potent and stealthy attacks that serves as a vehicle for our investigation. Its design makes poisoned samples statistically indistinguishable from benign code, allowing the attack to consistently maintain low detectability across all evaluated defense mechanisms. Armed with this capability, our exploration reveals a severe vulnerability: by injecting vulnerable code equivalent to only 0.05% of the entire knowledge base size, an attacker can successfully manipulate the backdoored retriever to rank the vulnerable code in its top-5 results in 51.29% of cases. This translates to severe downstream harm, causing models like GPT-4o to generate vulnerable code in over 40% of targeted scenarios, while leaving the system's general performance intact. Our findings establish that retriever backdooring is not a theoretical concern but a practical threat to the software development ecosystem that current defenses are blind to, highlighting the urgent need for robust security measures.
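The core mechanism the abstract describes—poisoned knowledge-base entries whose embeddings are crafted to align with trigger-bearing queries while remaining unremarkable for clean queries—can be illustrated with a toy dense-retrieval sketch. Everything below is hypothetical: the hand-made 3-dimensional vectors stand in for a learned encoder's embeddings, and `top_k` is a minimal cosine-similarity ranker, not the paper's actual retriever or attack code.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, kb, k=5):
    """Rank knowledge-base entries by similarity to the query embedding."""
    ranked = sorted(kb.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy embeddings (illustrative only; a real RACG system uses a learned encoder).
kb = {
    "benign_snippet_1": [0.9, 0.1, 0.0],
    "benign_snippet_2": [0.7, 0.3, 0.1],
    # Poisoned entry crafted so its embedding aligns with triggered queries
    # but scores below benign entries for ordinary queries.
    "poisoned_snippet": [0.6, 0.4, 0.69],
}

clean_query = [1.0, 0.1, 0.0]       # ordinary query embedding
triggered_query = [0.6, 0.4, 0.7]   # query carrying the implicit trigger

print(top_k(clean_query, kb, k=1))      # benign code ranks first
print(top_k(triggered_query, kb, k=1))  # poisoned code ranks first
```

The point of the sketch is the asymmetry: on clean queries the poisoned entry scores lowest, so retrieval quality (and hence general system performance) is untouched, while a trigger-bearing query pulls the vulnerable snippet to rank 1—mirroring the paper's reported 51.29% top-5 retrieval rate at a 0.05% poisoning budget.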
Problem

Research questions and friction points this paper is trying to address.

Investigates security threats from backdoor attacks in retrieval-augmented code generation systems
Develops a stealthy attack method to bypass existing detection mechanisms
Demonstrates severe vulnerabilities enabling generation of insecure code in targeted scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed VenomRACG for stealthy retriever backdoor attacks
Made poisoned code statistically indistinguishable from benign samples
Achieved low detectability across all evaluated defense mechanisms
Tian Li
National University of Defense Technology, Changsha, China
Bo Lin
National University of Defense Technology, Changsha, China
Shangwen Wang
National University of Defense Technology
software engineering
Yusong Tan
National University of Defense Technology
computer · operating system · cloud · AI