🤖 AI Summary
Existing security analysis tools suffer from poor generalizability, high false-positive rates, and coarse-grained detection, leading to inefficient code review. Method: This paper systematically evaluates six large language models (LLMs) across five prompt types for detecting five categories of security vulnerabilities, benchmarking them against state-of-the-art static analysis tools. Contribution/Results: We present empirical evidence that GPT-4, when guided by CWE-informed prompts, significantly outperforms leading static analyzers. However, LLMs introduce new challenges of their own, including response redundancy and deviation from the stated task. Through linguistic analysis and regression modeling, we identify code length, functional density, and developer engagement level as key determinants of LLM detection accuracy: GPT-4 achieves higher precision on short, functionally dense code segments written by developers with low project engagement. Our work establishes a reproducible evaluation framework and actionable optimization pathways for integrating LLMs into secure code review workflows.
📝 Abstract
Security code review is a time-consuming and labor-intensive process that typically relies on automated security defect detection tools. However, existing security analysis tools struggle with poor generalization, high false positive rates, and coarse detection granularity. Large Language Models (LLMs) have been considered promising candidates for addressing these challenges. In this work, we conducted an empirical study to explore the potential of LLMs in detecting security defects during code review. Specifically, we evaluated the performance of six LLMs under five different prompts and compared them with state-of-the-art static analysis tools. We also performed linguistic and regression analyses on the best-performing LLM to identify quality problems in its responses and factors influencing its performance. Our findings show that: (1) existing pre-trained LLMs have limited capability in security code review but significantly outperform the state-of-the-art static analysis tools. (2) GPT-4 performs best among all LLMs when provided with a CWE list for reference. (3) GPT-4 frequently generates responses that are verbose or not compliant with the task requirements given in the prompts. (4) GPT-4 is more adept at identifying security defects in code files with fewer tokens, containing functional logic, or written by developers with less involvement in the project.
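To make the CWE-informed prompting setup concrete, the following is a minimal, hypothetical sketch of how a review prompt with a CWE reference list might be assembled. The CWE entries, function name, and prompt wording here are illustrative assumptions, not the authors' actual prompt or code:

```python
# Hypothetical illustration of CWE-informed prompting: embed the code under
# review in a prompt that constrains the model to a fixed list of CWE
# categories. The CWE choices and wording below are assumptions.

CWE_LIST = [  # example reference list; the study evaluates five defect categories
    "CWE-79: Cross-site Scripting",
    "CWE-89: SQL Injection",
    "CWE-22: Path Traversal",
    "CWE-78: OS Command Injection",
    "CWE-476: NULL Pointer Dereference",
]

def build_cwe_prompt(code_snippet: str, cwes=CWE_LIST) -> str:
    """Assemble a security review prompt restricted to the given CWE list."""
    cwe_block = "\n".join(f"- {c}" for c in cwes)
    return (
        "You are performing a security code review.\n"
        "Check the following code ONLY for defects matching these CWEs:\n"
        f"{cwe_block}\n\n"
        "Code under review:\n```\n"
        f"{code_snippet}\n```\n"
        "Answer concisely: list each defect with its CWE ID, or reply "
        "'No security defects detected'."
    )

# Example: a snippet with string-concatenated SQL, which a model should
# flag against CWE-89 from the reference list above.
prompt = build_cwe_prompt('query = "SELECT * FROM users WHERE id=" + user_id')
```

The resulting string would then be sent to the LLM under evaluation; the point of the CWE list, per the abstract's finding (2), is to narrow the model's search space and reduce off-task responses.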