🤖 AI Summary
Large language models (LLMs) pretrained on vulnerable open-source code tend to generate insecure code. Method: We propose parameter-efficient fine-tuning (PEFT) using LoRA and IA3 on a curated dataset of C/C++ vulnerability-fix commits, the first systematic evaluation of PEFT for security-aware code generation. We introduce a novel function-level and code-block-level sample organization paradigm, superior to file- or line-level alternatives, and construct an adversarial evaluation benchmark comprising 14,622 files covering 52 prevalent CWEs. Results: Our approach improves secure code generation rates by +6.4% for C and +5.4% for C++, demonstrating that fine-grained vulnerability-fix data significantly enhances model security alignment. This work establishes a reproducible, cost-effective PEFT pathway for security-oriented code generation.
📝 Abstract
AI-powered coding assistants such as GitHub Copilot and OpenAI ChatGPT have achieved notable success in automating code generation. However, these tools rely on pre-trained Large Language Models (LLMs) that are typically trained on human-written code sourced from open-source project hosting sites like GitHub, which often contains inherent security vulnerabilities. These vulnerabilities may then be mirrored in the code generated by these LLMs, a critical risk revealed and highlighted by recent empirical studies. In this work, we present an exploratory study on whether fine-tuning pre-trained LLMs on datasets of vulnerability-fixing commits can promote secure code generation. We explored two parameter-efficient fine-tuning techniques (LoRA and IA3) on two pre-trained LLMs for code generation. We crawled a fine-tuning dataset (14,622 C and C++ files) for secure code generation by collecting code fixes of confirmed vulnerabilities from open-source repositories. Our evaluation dataset comprises 52 vulnerability scenarios designed to cover the most dangerous C and C++ Common Weakness Enumerations (CWEs). Each scenario is a prompt that may induce LLMs to generate vulnerable code. Our exploration reveals that fine-tuning LLMs can improve secure code generation by 6.4% for C and 5.4% for C++. We further experimented with fine-tuning LLMs using different granularities of the collected secure code dataset (file, function, block, and line). We found that fine-tuning with function-level and block-level datasets achieves the best secure code generation performance, compared to the alternatives (file-level and line-level).
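The LoRA technique named above adapts a model by training only a small low-rank additive update while the pretrained weights stay frozen. The sketch below illustrates that core idea with toy matrices; the dimensions, names, and scaling value are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of a LoRA-style low-rank update (illustrative only).
# Instead of updating the frozen pretrained weight W (d_out x d_in),
# LoRA trains two small matrices A (r x d_in) and B (d_out x r), with
# r much smaller than d_in/d_out, and adds scale * (B @ A) to W.

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_delta(A, B, scale):
    """Low-rank weight update: delta_W = scale * (B @ A)."""
    BA = matmul(B, A)
    return [[scale * x for x in row] for row in BA]

# Frozen pretrained weight (2x2) plus a rank-1 LoRA update (hypothetical values).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # 1 x 2 (rank r = 1), trainable
B = [[0.5], [0.25]]         # 2 x 1, trainable
delta = lora_delta(A, B, scale=2.0)
W_adapted = [[W[i][j] + delta[i][j] for j in range(2)] for i in range(2)]
print(W_adapted)  # -> [[2.0, 2.0], [0.5, 2.0]]
```

Because only A and B are trained (2 * r * d parameters instead of d * d), this is what makes fine-tuning on the vulnerability-fix dataset cost-effective relative to full fine-tuning.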