🤖 AI Summary
Existing retrieval-augmented code generation (RACG) systems neglect security considerations and are vulnerable to knowledge-base poisoning attacks, leading to the generation of insecure code. To address this, we propose CodeGuarder, a security-hardening framework that introduces a dual-track retrieval paradigm combining functionality and security. CodeGuarder constructs a security knowledge base grounded in real-world vulnerability databases and integrates three key mechanisms: subtask-decomposed retrieval, vulnerability-type-aware re-ranking, and security-knowledge-infused prompting. Together, these enable vulnerability-sensitive dynamic filtering and fine-grained injection of security guidance. Extensive experiments across multiple large language models demonstrate that CodeGuarder improves code security rates by 20.12% on average in the standard setting, and by 31.53% and 21.91% under two distinct poisoning scenarios, respectively, without compromising functional correctness. It also exhibits strong cross-language generalization.
📝 Abstract
Retrieval-Augmented Code Generation (RACG) leverages external knowledge to enhance Large Language Models (LLMs) in code synthesis, improving the functional correctness of the generated code. However, existing RACG systems largely overlook security, posing substantial risks. In particular, poisoning knowledge bases with malicious code can mislead LLMs into generating insecure outputs, a critical threat in modern software development. To address this, we propose a security-hardening framework for RACG systems, CodeGuarder, that shifts the paradigm from retrieving only functional code examples to incorporating both functional code and security knowledge. Our framework constructs a security knowledge base from real-world vulnerability databases, including secure code samples and root-cause annotations. For each code generation query, a retriever decomposes the query into fine-grained sub-tasks and fetches relevant security knowledge. To prioritize critical security guidance, we introduce a re-ranking and filtering mechanism that leverages LLMs' varying susceptibility to different vulnerability types. This filtered security knowledge is then integrated into the generation prompt. Our evaluation shows CodeGuarder significantly improves code security rates across various LLMs, achieving average improvements of 20.12% in standard RACG, and of 31.53% and 21.91% under two distinct poisoning scenarios, without compromising functional correctness. Furthermore, CodeGuarder demonstrates strong generalization, enhancing security even when security knowledge for the targeted language is lacking. This work presents CodeGuarder as a pivotal advancement towards building secure and trustworthy RACG systems.
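The pipeline the abstract describes (sub-task decomposition, security-knowledge retrieval, susceptibility-aware re-ranking, and prompt injection) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: all names, the lexical retriever, and the susceptibility weights are hypothetical stand-ins for the learned components CodeGuarder actually uses.

```python
from dataclasses import dataclass

@dataclass
class SecurityKnowledge:
    vuln_type: str        # e.g. a CWE category
    secure_example: str   # secure code sample from the knowledge base
    root_cause: str       # root-cause annotation

# Toy security knowledge base; real entries come from vulnerability databases.
KNOWLEDGE_BASE = [
    SecurityKnowledge("CWE-89", "cur.execute(q, params)", "string-built SQL enables injection"),
    SecurityKnowledge("CWE-798", "key = os.environ['API_KEY']", "hard-coded credentials leak secrets"),
    SecurityKnowledge("CWE-22", "path = safe_join(base, name)", "unsanitized file paths allow traversal"),
]

# Illustrative per-vulnerability-type weights standing in for measured
# LLM susceptibility (higher = the model is more prone to this flaw).
SUSCEPTIBILITY = {"CWE-89": 0.9, "CWE-22": 0.7, "CWE-798": 0.4}

def decompose(query: str) -> list[str]:
    """Stub sub-task decomposition (the framework uses an LLM for this step)."""
    return [part.strip() for part in query.split(" and ")]

def retrieve(subtask: str) -> list[SecurityKnowledge]:
    """Toy lexical retrieval: match entries whose root cause shares a word."""
    words = set(subtask.lower().split())
    return [k for k in KNOWLEDGE_BASE if words & set(k.root_cause.lower().split())]

def rerank_and_filter(items: list[SecurityKnowledge], top_k: int = 2) -> list[SecurityKnowledge]:
    """Prioritize knowledge for vulnerability types the LLM is most prone to."""
    ranked = sorted(items, key=lambda k: SUSCEPTIBILITY.get(k.vuln_type, 0.0), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, knowledge: list[SecurityKnowledge]) -> str:
    """Infuse the filtered security knowledge into the generation prompt."""
    guidance = "\n".join(
        f"- [{k.vuln_type}] {k.root_cause}; prefer: {k.secure_example}" for k in knowledge
    )
    return f"Task: {query}\nSecurity guidance:\n{guidance}\nGenerate secure code."

query = "build a SQL query from user input and read a file by name"
# Deduplicate knowledge retrieved across sub-tasks, then re-rank and inject.
hits = {id(k): k for s in decompose(query) for k in retrieve(s)}
prompt = build_prompt(query, rerank_and_filter(list(hits.values())))
print(prompt)
```

In this toy run, the SQL-injection entry outranks the path-traversal entry because its susceptibility weight is higher, while the unrelated credentials entry is never retrieved; the real system replaces the lexical matcher with a learned retriever and derives the weights empirically.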