AI Summary
To address the critical lack of interpretability in software vulnerability detection, this paper proposes LLMVulExp, a novel framework that systematically investigates the capability of large language models (LLMs) to jointly perform vulnerability detection and multi-dimensional explanation generation (i.e., root-cause analysis, precise localization, and repair guidance). Methodologically, the authors design an explanation-oriented supervised fine-tuning paradigm and introduce a Chain-of-Thought prompting strategy to enhance reasoning consistency; additionally, they incorporate fine-grained code context modeling to improve semantic understanding. Evaluated on the SeVC benchmark, LLMVulExp achieves over a 90% F1 score for vulnerability detection while generating structured, high-fidelity explanations. Empirical results indicate meaningful gains in developer efficiency for both vulnerability localization and patching. This work establishes a new paradigm for LLM-driven security analysis and provides a reproducible, principled technical pathway toward interpretable, end-to-end vulnerability intelligence.
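The three explanation dimensions named above (root cause, location, repair guidance) suggest a structured output format. The following is a minimal sketch of what such a structure could look like in code; the class name, field names, and the assumed model-output keys are our own illustrative choices, not the paper's actual interface.

```python
from dataclasses import dataclass


@dataclass
class VulnerabilityExplanation:
    """Illustrative container for the explanation dimensions described
    in the summary (all names here are assumptions, not the paper's)."""
    vuln_type: str        # detected vulnerability category, e.g. a CWE id
    root_cause: str       # why the code is vulnerable
    location: str         # where the flaw sits, e.g. a line or code span
    repair_guidance: str  # suggested fix


def parse_explanation(model_output: dict) -> VulnerabilityExplanation:
    # Assumes a fine-tuned model prompted to emit these four keys.
    return VulnerabilityExplanation(
        vuln_type=model_output["type"],
        root_cause=model_output["cause"],
        location=model_output["location"],
        repair_guidance=model_output["repair"],
    )
```

A downstream tool could then render each field separately, e.g. showing `location` inline in the editor and `repair_guidance` in a fix panel.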
Abstract
Software vulnerabilities pose significant risks to the security and integrity of software systems. Prior studies have proposed various approaches to vulnerability detection using deep learning or pre-trained models. However, these approaches still lack detailed explanations that go beyond merely detecting a vulnerability's occurrence, and therefore fail to truly help software developers understand and remediate the issues. Recently, large language models (LLMs) have demonstrated remarkable capabilities in comprehending complex contexts and generating content, presenting new opportunities for both detecting and explaining software vulnerabilities. In this paper, we conduct a comprehensive study to investigate the capabilities of LLMs in both detecting and explaining vulnerabilities, and we propose LLMVulExp, a framework that utilizes LLMs for these tasks. With specialized fine-tuning for vulnerability explanation, LLMVulExp not only detects the types of vulnerabilities in the code but also analyzes the code context to generate the cause, location, and repair suggestions for these vulnerabilities. These detailed explanations are crucial for helping developers quickly analyze and locate vulnerability issues, providing essential guidance and reference for effective remediation. We find that LLMVulExp effectively enables LLMs to perform vulnerability detection (e.g., achieving over a 90% F1 score on the SeVC dataset) and to provide detailed explanations. We also explore the potential of advanced strategies such as Chain-of-Thought (CoT) prompting to guide the LLMs in concentrating on vulnerability-prone code, with promising results.
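To make the CoT idea concrete, here is a minimal sketch of a Chain-of-Thought prompt template for joint detection and explanation. The prompt wording, step structure, and answer fields below are illustrative assumptions about how such prompting could be set up; the paper's actual prompts may differ.

```python
# Hypothetical CoT prompt template: the model is asked to reason step by
# step over the code before committing to a detection verdict and the
# three explanation fields (cause, location, repair).
COT_TEMPLATE = """You are a security analyst reviewing the code below.

Code:
{code}

Think step by step:
1. Identify operations that handle external or untrusted data.
2. Check whether those operations are guarded (bounds, validation, sanitization).
3. Decide whether a vulnerability exists and, if so, classify it.

Then answer with:
- Vulnerability type:
- Cause:
- Location:
- Repair suggestion:
"""


def build_cot_prompt(code: str) -> str:
    """Fill the template with the code snippet under analysis."""
    return COT_TEMPLATE.format(code=code)
```

The prompt would typically be sent to the fine-tuned model via whatever inference API is in use, and the four answer fields parsed from the completion.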