Diverse LLMs vs. Vulnerabilities: Who Detects and Fixes Them Better?

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited reliability in software vulnerability detection (SVD) and repair (SVR), particularly for complex vulnerabilities, suffering from low detection accuracy and untrustworthy patch generation. To address these challenges, we propose DVDR-LLM, a novel ensemble framework leveraging diverse LLMs through a voting-based integration mechanism. Our method introduces cross-model consistency analysis and an adjustable consensus threshold to jointly optimize detection and verification, explicitly modeling the trade-off between false positives and false negatives. DVDR-LLM supports multi-file and cross-context vulnerability scenarios and employs code-level fine-grained annotations for evaluation. Experimental results demonstrate a 10–12% improvement in detection accuracy, an 18% increase in recall for multi-file vulnerabilities, and an 11.8% gain in F1-score. The framework is publicly open-sourced.

📝 Abstract
Large Language Models (LLMs) are increasingly being studied for Software Vulnerability Detection (SVD) and Repair (SVR). Individual LLMs have demonstrated code understanding abilities, but they frequently struggle to identify complex vulnerabilities and generate fixes. This study presents DVDR-LLM, an ensemble framework that combines outputs from diverse LLMs to determine whether aggregating multiple models reduces error rates. Our evaluation reveals that DVDR-LLM achieves 10–12% higher detection accuracy than the average performance of individual models, with benefits increasing as code complexity grows. For multi-file vulnerabilities, the ensemble approach demonstrates significant improvements in recall (+18%) and F1 score (+11.8%) over individual models. However, the approach introduces a measurable trade-off: it reduces false positives in verification tasks while increasing false negatives in detection tasks, so the required level of agreement among the LLMs (the consensus threshold) must be tuned to the security context. Artifact: https://github.com/Erroristotle/DVDR_LLM
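The voting-based integration described above (flag a vulnerability only when a sufficient fraction of models agree, with the threshold trading false positives against false negatives) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the function name and interface are hypothetical.

```python
def ensemble_detect(votes, threshold):
    """Flag code as vulnerable if the fraction of models voting
    'vulnerable' meets the consensus threshold.

    votes: list of booleans, one per model (True = model flags a vulnerability).
    threshold: required agreement fraction in (0, 1].
      Lower values favor recall (fewer false negatives);
      higher values favor precision (fewer false positives).
    """
    if not votes:
        raise ValueError("need at least one model vote")
    agreement = sum(votes) / len(votes)
    return agreement >= threshold


# Example: 5 models, 3 of which flag the code as vulnerable.
votes = [True, True, True, False, False]
print(ensemble_detect(votes, threshold=0.5))  # majority consensus -> True
print(ensemble_detect(votes, threshold=0.8))  # strict consensus -> False
```

Raising the threshold makes the ensemble more conservative, which matches the reported trade-off: fewer false positives at the cost of more missed (false-negative) detections.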
Problem

Research questions and friction points this paper is trying to address.

Improves software vulnerability detection accuracy using ensemble LLMs
Addresses complex and multi-file vulnerability identification challenges
Balances the trade-off between false positives and false negatives in detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble framework aggregates diverse LLMs for vulnerability detection
Combines multiple model outputs to reduce errors and improve accuracy
Adjusts agreement threshold to balance detection and verification trade-offs