Guiding AI to Fix Its Own Flaws: An Empirical Study on LLM-Driven Secure Code Generation

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate insecure code by neglecting established security practices, leading to prevalent vulnerabilities. To address this, we propose a collaborative guidance framework that leverages self-generated vulnerability hints and multi-granularity feedback, spanning line level to function level, to steer LLMs toward secure code generation and repair. This work presents the first unified, quantitative evaluation of both mainstream closed-source (e.g., GPT-4) and open-source (e.g., CodeLlama) LLMs on canonical security benchmarks, including CodeXGLUE-Security and SWE-bench Security. Experimental results demonstrate that fine-grained feedback significantly reduces vulnerability generation rates (an average reduction of 42%) and substantially improves vulnerability repair success (up to 68%). Our study establishes a reproducible methodology for security-aware code synthesis and delivers actionable engineering insights for developing robust, secure AI-assisted programming tools.

📝 Abstract
Large Language Models (LLMs) have become powerful tools for automated code generation. However, these models often overlook critical security practices, which can result in the generation of insecure code containing vulnerabilities: weaknesses or flaws that attackers can exploit to compromise a system. Despite this risk, there has been limited exploration of strategies for guiding LLMs to generate secure code, and there is a lack of in-depth analysis of how effectively LLMs repair code containing vulnerabilities. In this paper, we present a comprehensive evaluation of state-of-the-art LLMs by examining their inherent tendencies to produce insecure code, their capability to generate secure code when guided by self-generated vulnerability hints, and their effectiveness in repairing vulnerabilities when provided with different levels of feedback. Our study covers both proprietary and open-weight models across various scales and leverages established benchmarks to assess a wide range of vulnerability types. Through quantitative and qualitative analyses, we reveal that although LLMs are prone to generating insecure code, advanced models can benefit from vulnerability hints and fine-grained feedback to avoid or fix vulnerabilities. We also provide actionable suggestions to help developers reduce vulnerabilities when using LLMs for code generation.
Problem

Research questions and friction points this paper is trying to address.

LLMs often generate insecure code with vulnerabilities
Limited strategies exist for guiding LLMs to produce secure code
Effectiveness of LLMs in repairing vulnerabilities lacks in-depth analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs guided by self-generated vulnerability hints
Fine-grained feedback for vulnerability repair
Evaluation across proprietary and open-weight models
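The hint-and-feedback loop the paper describes can be pictured as a generate → check → repair cycle. The sketch below is purely illustrative and not the authors' actual pipeline: the security checker is a toy regex for one SQL-injection pattern, and the `repair` step stands in for an LLM call guided by line-level feedback; all function and class names here are assumptions for illustration.

```python
# Hypothetical sketch of a generate -> check -> repair loop with
# line-level feedback. The model call is stubbed out; the feedback
# format and function names are illustrative assumptions.
import re
from dataclasses import dataclass

@dataclass
class Feedback:
    line_no: int   # line-level granularity
    message: str   # hint that would be fed back to the model

def check_security(code: str) -> list[Feedback]:
    """Toy static check: flag SQL built via string concatenation (CWE-89)."""
    issues = []
    for i, line in enumerate(code.splitlines(), start=1):
        if re.search(r'execute\(.*[+%].*\)', line):
            issues.append(Feedback(i, "possible SQL injection: use a parameterized query"))
    return issues

def repair(code: str, feedback: list[Feedback]) -> str:
    """Stand-in for an LLM repair call guided by line-level feedback."""
    lines = code.splitlines()
    for fb in feedback:
        # A real system would prompt the model with fb.message; here we
        # hard-code the fix for the one pattern the toy checker knows.
        lines[fb.line_no - 1] = '    cur.execute("SELECT * FROM users WHERE id = ?", (uid,))'
    return "\n".join(lines)

def secure_generate(initial_code: str, max_rounds: int = 3) -> str:
    """Iterate until the checker is satisfied or the round budget runs out."""
    code = initial_code
    for _ in range(max_rounds):
        issues = check_security(code)
        if not issues:
            break
        code = repair(code, issues)
    return code

insecure = 'def get_user(cur, uid):\n    cur.execute("SELECT * FROM users WHERE id = " + uid)'
fixed = secure_generate(insecure)
assert not check_security(fixed)
```

The loop bound (`max_rounds`) reflects the practical point that feedback-driven repair is iterative and must terminate even when the model fails to converge on a secure version.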
Hao Yan
George Mason University, Fairfax, USA
Swapneel Suhas Vaidya
George Mason University, Fairfax, USA
Xiaokuan Zhang
Assistant Professor, Computer Science, George Mason University
Security and Privacy · XR Security · Web3/DeFi Security · Side Channels · Rust
Ziyu Yao
George Mason University, Fairfax, USA