Prompting Techniques for Secure Code Generation: A Systematic Investigation

📅 2024-07-09
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
📄 PDF
🤖 AI Summary
This study systematically investigates how prompt engineering techniques affect the security of code generated by large language models (LLMs). Addressing the problem of security vulnerabilities in LLM-generated code, we evaluate diverse prompting strategies on GPT-3, GPT-3.5, and GPT-4 using 150 security-sensitive natural language instructions. We propose the first systematic taxonomy of prompt engineering tailored to secure code generation. Our key method, Recursive Criticism and Improvement (RCI), iteratively refines code through security-focused critique and revision. Results show RCI consistently reduces security defect rates across all models—by an average of 37.2%—and improves OWASP Top 10 vulnerability detection accuracy by 19.6% over baseline prompting, without compromising functional correctness. This work establishes the first empirically grounded, prompt-based framework for enhancing code security in LLMs, accompanied by a reproducible, practice-oriented guideline for secure AI-assisted development.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) are gaining momentum in software development with prompt-driven programming enabling developers to create code from natural language (NL) instructions. However, studies have questioned their ability to produce secure code and, thereby, the quality of prompt-generated software. Alongside, various prompting techniques that carefully tailor prompts have emerged to elicit optimal responses from LLMs. Still, the interplay between such prompting strategies and secure code generation remains under-explored and calls for further investigations. OBJECTIVE: In this study, we investigate the impact of different prompting techniques on the security of code generated from NL instructions by LLMs. METHOD: First we perform a systematic literature review to identify the existing prompting techniques that can be used for code generation tasks. A subset of these techniques are evaluated on GPT-3, GPT-3.5, and GPT-4 models for secure code generation. For this, we used an existing dataset consisting of 150 NL security-relevant code-generation prompts. RESULTS: Our work (i) classifies potential prompting techniques for code generation (ii) adapts and evaluates a subset of the identified techniques for secure code generation tasks and (iii) observes a reduction in security weaknesses across the tested LLMs, especially after using an existing technique called Recursive Criticism and Improvement (RCI), contributing valuable insights to the ongoing discourse on LLM-generated code security.
Problem

Research questions and friction points this paper is trying to address.

Impact of prompting techniques
Secure code generation by LLMs
Reduction in security weaknesses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic prompting technique evaluation
Adaptation of techniques for security
Reduction in code security weaknesses
🔎 Similar Papers
No similar papers found.