Large Language Models for Multilingual Vulnerability Detection: How Far Are We?

📅 2025-06-09
🤖 AI Summary
Problem: Prior work focuses predominantly on single-language, function-level vulnerability detection, and lacks a systematic evaluation of pre-trained language models (PLMs) and large language models (LLMs) across multilingual, multi-granularity settings. Method: This paper introduces the first unified benchmark covering seven programming languages and two granularity levels (function-level and line-level), built on over 30,000 real-world vulnerability-fixing patches. It further proposes an optimized inference strategy for GPT-4o that combines instruction tuning with few-shot prompting. Contribution/Results: The approach achieves substantial accuracy gains over PLMs such as CodeT5P, particularly in identifying high-severity vulnerabilities. The empirical results demonstrate LLMs' practicality, cross-lingual generalizability, and complementary strengths relative to PLMs, establishing a reproducible methodology and empirical benchmark for LLM-based security applications.

📝 Abstract
Various deep learning-based approaches utilizing pre-trained language models (PLMs) have been proposed for automated vulnerability detection. With recent advancements in large language models (LLMs), several studies have begun exploring their application to vulnerability detection tasks. However, existing studies primarily focus on specific programming languages (e.g., C/C++) and function-level detection, leaving the strengths and weaknesses of PLMs and LLMs in multilingual and multi-granularity scenarios largely unexplored. To bridge this gap, we conduct a comprehensive fine-grained empirical study evaluating the effectiveness of state-of-the-art PLMs and LLMs for multilingual vulnerability detection. Using over 30,000 real-world vulnerability-fixing patches across seven programming languages, we systematically assess model performance at both the function-level and line-level. Our key findings indicate that GPT-4o, enhanced through instruction tuning and few-shot prompting, significantly outperforms all other evaluated models, including CodeT5P. Furthermore, the LLM-based approach demonstrates superior capability in detecting unique multilingual vulnerabilities, particularly excelling in identifying the most dangerous and high-severity vulnerabilities. These results underscore the promising potential of adopting LLMs for multilingual vulnerability detection at function-level and line-level, revealing their complementary strengths and substantial improvements over PLM approaches. This first empirical evaluation of PLMs and LLMs for multilingual vulnerability detection highlights LLMs' value in addressing real-world software security challenges.
Problem

Research questions and friction points this paper is trying to address.

Evaluating PLMs and LLMs for multilingual vulnerability detection
Assessing model performance at function-level and line-level
Identifying strengths of LLMs in detecting high-severity vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes GPT-4o inference with instruction tuning and few-shot prompting
Evaluates over 30,000 real-world vulnerability-fixing patches across seven programming languages
Detects unique multilingual and high-severity vulnerabilities missed by PLM baselines
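The paper's inference strategy combines instruction tuning with few-shot prompting, i.e., prefacing the target function with a small number of labeled examples before asking the model to classify it. The exact prompt template, example selection, and output format are not given in this summary, so everything below (the wording, the `FEW_SHOT_EXAMPLES` contents, the `build_prompt` helper) is a hypothetical sketch of how such a few-shot prompt could be assembled:

```python
# Hypothetical sketch of a few-shot vulnerability-detection prompt.
# The paper's actual template and examples are not shown in this summary;
# these labeled snippets and the instruction wording are assumptions.

FEW_SHOT_EXAMPLES = [
    {
        "code": "char buf[8];\nstrcpy(buf, user_input);",
        "label": "VULNERABLE",
        "reason": "strcpy into a fixed-size buffer allows an overflow.",
    },
    {
        "code": "char buf[8];\nstrncpy(buf, user_input, sizeof(buf) - 1);\nbuf[7] = '\\0';",
        "label": "SAFE",
        "reason": "The copy is bounded and the buffer is null-terminated.",
    },
]

def build_prompt(target_function: str) -> str:
    """Assemble a few-shot prompt asking the model to classify a function
    and point to the suspicious lines (function-level + line-level)."""
    parts = [
        "You are a security auditor. Classify each function as "
        "VULNERABLE or SAFE, and list the suspicious lines."
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Function:\n{ex['code']}\nAnswer: {ex['label']} - {ex['reason']}"
        )
    # The query function is left unanswered for the model to complete.
    parts.append(f"Function:\n{target_function}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt("int idx = get_index();\nreturn table[idx];")
print(prompt.count("Answer:"))  # one per example plus the final query
```

The assembled string would then be sent to the model (e.g., via a chat-completion API call, omitted here); the few-shot examples anchor both the label vocabulary and the expected line-level explanation format.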
Honglin Shu
Kyushu University
AI4SE
Michael Fu
The University of Melbourne
Software Engineering · DevSecOps · Deep Learning · Language Models
Junji Yu
Tianjin University, China
Dong Wang
College of Intelligence and Computing, Tianjin University, China
C. Tantithamthavorn
Information Technology, Monash University, Australia
Junjie Chen
College of Intelligence and Computing, Tianjin University, China
Yasutaka Kamei
Professor, Kyushu University, InaRIS Fellow
Software Engineering · Empirical Software Engineering · Mining Software Repositories · Software Quality