SecVulEval: Benchmarking LLMs for Real-World C/C++ Vulnerability Detection

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vulnerability detection benchmarks predominantly employ function-level binary classification; they lack statement-level fine-grained annotations, program-level contextual information (e.g., data/control flow, interprocedural dependencies), and comprehensive coverage of real-world CVEs, which leads to inflated performance estimates and poor generalizability. Method: We introduce SecVulEval, a fine-grained benchmark for real C/C++ vulnerabilities covering 5,867 CVEs (1999–2024) and 25,440 functions. It features CVE-driven, human-verified statement-level labels; integrates program-level context derived from static analysis; and rigorously filters out mislabeled, duplicate, and inconsistent samples. Contribution/Results: Using a multi-agent evaluation framework, we find that the state-of-the-art Claude-3.7-Sonnet achieves only a 23.83% F1-score at the statement level, revealing fundamental limitations in precise vulnerability localization and causal reasoning. SecVulEval establishes a rigorous, realistic baseline for evaluating LLM-based vulnerability detection.

📝 Abstract
Large Language Models (LLMs) have shown promise in software engineering tasks, but evaluating their effectiveness in vulnerability detection is challenging due to the lack of high-quality datasets. Most existing datasets are limited to function-level labels, ignoring finer-grained vulnerability patterns and crucial contextual information. Also, poor data quality such as mislabeling, inconsistent annotations, and duplicates can lead to inflated performance and weak generalization. Moreover, by including only the functions, these datasets miss broader program context, like data/control dependencies and interprocedural interactions, that are essential for accurately understanding real-world security flaws. Without this context, detection models are evaluated under unrealistic assumptions. To address these limitations, this paper introduces SecVulEval, a benchmark designed to support fine-grained evaluation of LLMs and other detection methods with rich contextual information. SecVulEval focuses on real-world C/C++ vulnerabilities at the statement level. This granularity enables more precise evaluation of a model's ability to localize vulnerabilities, beyond simple binary classification at the function level. By incorporating rich contextual information, SecVulEval sets a new standard for vulnerability detection benchmarks in realistic scenarios. This benchmark includes 25,440 function samples covering 5,867 unique CVEs in C/C++ projects from 1999 to 2024. We evaluated the SOTA LLMs with a multi-agent-based approach. The evaluation on our dataset shows that the models are still far from accurately predicting vulnerable statements in a given function. The best-performing Claude-3.7-Sonnet model achieves 23.83% F1-score for detecting vulnerable statements with correct reasoning. Finally, we analyze the LLM outputs and provide insights into their behavior in vulnerability detection for C/C++.
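To make the statement-level granularity concrete, here is a minimal sketch of how a sample with statement-level labels and program context might be represented. All field names and values here are illustrative assumptions, not SecVulEval's actual schema.

```python
# Illustrative sketch of a statement-level vulnerability sample.
# Field names and values are hypothetical, not SecVulEval's actual schema.
sample = {
    "cve_id": "CVE-XXXX-XXXXX",          # placeholder CVE identifier
    "function": [                        # function body, one statement per entry
        "char buf[8];",
        "int n = strlen(src);",
        "strcpy(buf, src);",             # unbounded copy: the vulnerable statement
        "return n;",
    ],
    "vulnerable_statements": [2],        # 0-based indices of flawed statements
    "context": {                         # program-level context beyond the function
        "callers": ["parse_header"],
        "data_flow": ["src <- network input"],
    },
}

# Function-level binary classification collapses all of this to a single bit;
# statement-level labels let us ask *which* statement is flawed, and the
# context fields supply the interprocedural information the abstract describes.
flagged = [sample["function"][i] for i in sample["vulnerable_statements"]]
```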
Problem

Research questions and friction points this paper is trying to address.

Lack of high-quality datasets for LLM vulnerability detection evaluation
Existing datasets miss fine-grained patterns and program context
Poor data quality inflates performance and weakens generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statement-level vulnerability detection benchmark
Multi-agent-based LLM evaluation approach
Rich contextual information integration
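The statement-level F1-score reported above can be read as set overlap between predicted and labeled vulnerable statements. Below is a minimal sketch of that metric under this assumption; the paper's exact matching rules and its reasoning-correctness requirement are not modeled here.

```python
def statement_f1(predicted: set, labeled: set) -> float:
    """F1 over sets of statement indices flagged as vulnerable.

    A generic set-overlap F1 sketch; SecVulEval additionally requires the
    model's reasoning to be correct, which this simplification omits.
    """
    if not predicted and not labeled:
        return 1.0  # nothing to find, nothing flagged
    tp = len(predicted & labeled)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(labeled) if labeled else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Model flags statements 2 and 5; ground truth is statement 2 only:
# precision = 1/2, recall = 1/1, so F1 = 2 * 0.5 / 1.5 = 2/3.
score = statement_f1({2, 5}, {2})
```

Under this reading, the 23.83% best-case score means even the strongest model rarely pins down the exact flawed statements with sound reasoning.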
Md Basim Uddin Ahmed
York University
Nima Shiri Harzevili
York University
Jiho Shin
Ph.D. Candidate, York University
Software Engineering · Software Analytics · AI4SE · Software Testing
Hung Viet Pham
York University
Song Wang
York University