Evaluating and Enhancing the Vulnerability Reasoning Capabilities of Large Language Models

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation of current large language models in vulnerability detection: they frequently reach correct verdicts through hallucinated logic or superficial patterns rather than robust causal reasoning about program behavior. To enable fine-grained evaluation of the reasoning process, the authors construct a benchmark from expert-annotated causal chains and semantically equivalent perturbed code, and propose DAGVul, a novel framework that formulates vulnerability reasoning as a directed acyclic graph (DAG) generation task. DAGVul further introduces Reinforcement Learning with Verifiable Rewards (RLVR) to improve the consistency and accuracy of the generated reasoning structures. Experimental results show that DAGVul improves reasoning F1 by an average of 18.9%, with its 8B model outperforming comparable general-purpose and specialized large models, including Qwen3-30B-Reasoning, and approaching Claude-Sonnet-4.5 (75.47% vs. 76.11%).
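The summary does not reproduce the paper's exact DAG schema, so the following is a minimal sketch, assuming each node is a single causal claim anchored to a source location and each edge is a cause-effect dependency. The class and field names (`ReasonNode`, `ReasoningDAG`, `code_anchor`) are illustrative assumptions, not the paper's API; the point is only to show how a reasoning structure with explicit causal dependencies differs from a linear chain of thought.

```python
from dataclasses import dataclass, field

@dataclass
class ReasonNode:
    node_id: str
    claim: str        # one causal step, e.g. "len is attacker-controlled"
    code_anchor: str  # source location the step refers to, e.g. "parse.c:42"

@dataclass
class ReasoningDAG:
    nodes: dict[str, ReasonNode] = field(default_factory=dict)
    edges: set[tuple[str, str]] = field(default_factory=set)  # (cause, effect)

    def add_step(self, node: ReasonNode, causes: tuple[str, ...] = ()) -> None:
        # Register a reasoning step and link it to the steps it depends on.
        self.nodes[node.node_id] = node
        for c in causes:
            self.edges.add((c, node.node_id))

    def is_acyclic(self) -> bool:
        """Kahn's algorithm: a valid causal chain must contain no cycles."""
        indeg = {n: 0 for n in self.nodes}
        for _, dst in self.edges:
            indeg[dst] += 1
        queue = [n for n, d in indeg.items() if d == 0]
        seen = 0
        while queue:
            n = queue.pop()
            seen += 1
            for src, dst in self.edges:
                if src == n:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        queue.append(dst)
        return seen == len(self.nodes)
```

The acyclicity check captures the structural-consistency requirement that a flat chain-of-thought trace cannot enforce when causal dependencies branch and merge.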

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in vulnerability detection. However, a critical reliability gap persists: models frequently yield correct detection verdicts based on hallucinated logic or superficial patterns that deviate from the actual root cause. This misalignment remains largely obscured because contemporary benchmarks predominantly prioritize coarse-grained classification metrics, lacking the granular ground truth required to evaluate the underlying reasoning process. To bridge this gap, we first construct a benchmark consisting of two datasets: (1) real-world vulnerabilities with expert-curated causal reasoning as ground truth, and (2) semantically equivalent code perturbations for assessing reasoning robustness. Our large-scale empirical study reveals that even state-of-the-art models struggle to maintain logical consistency during semantic code comprehension, exhibiting 12 systematic failure patterns. Addressing these limitations, we propose DAGVul, a novel framework that models vulnerability reasoning as a Directed Acyclic Graph (DAG) generation task. Unlike linear chain-of-thought (CoT), our approach explicitly maps causal dependencies to enforce structural consistency. By further introducing Reinforcement Learning with Verifiable Rewards (RLVR), we align model reasoning traces with program-intrinsic logic. Experimental results demonstrate that our framework improves the reasoning F1-score by an average of 18.9% over all the baselines. Remarkably, our 8B-parameter implementation not only outperforms existing models of comparable scale but also surpasses specialized large-scale reasoning models, including Qwen3-30B-Reasoning and GPT-OSS-20B-High. It is even competitive with state-of-the-art models like Claude-Sonnet-4.5 (75.47% vs. 76.11%), establishing a new standard of efficiency in vulnerability reasoning across model scales.
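The abstract does not give the RLVR reward formula; the snippet below is a hedged sketch, under the assumption that the verifiable reward combines a hard structural check (the generated graph must be acyclic) with a graded edge-level F1 against the expert-annotated causal chain. `ReasoningDAG` and its `is_acyclic` method are the illustrative types from the sketch above, not names taken from the paper.

```python
# Assumed reward: 0 for structurally invalid (cyclic) graphs, otherwise edge-level
# F1 against the expert-annotated gold DAG. Node alignment between predicted and
# gold graphs is glossed over here; in practice claims would have to be matched
# (e.g., by anchored code location) before edges can be compared.
Edge = tuple[str, str]

def edge_f1(pred_edges: set[Edge], gold_edges: set[Edge]) -> float:
    tp = len(pred_edges & gold_edges)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_edges)
    recall = tp / len(gold_edges)
    return 2 * precision * recall / (precision + recall)

def verifiable_reward(pred: ReasoningDAG, gold_edges: set[Edge]) -> float:
    if not pred.is_acyclic():  # hard constraint: causal chains must be acyclic
        return 0.0
    return edge_f1(pred.edges, gold_edges)  # graded overlap with the expert chain
```

A reward of this shape can be checked automatically against the annotation, which is the property RLVR relies on: no learned reward model is needed, only the expert-curated ground truth.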
Problem

Research questions and friction points this paper is trying to address.

vulnerability reasoning
large language models
reasoning reliability
causal reasoning
code comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directed Acyclic Graph (DAG)
Reinforcement Learning with Verifiable Rewards (RLVR)
vulnerability reasoning
causal reasoning
code perturbation
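To make the "code perturbation" keyword above concrete, here is a minimal, assumed example of a semantics-preserving perturbation (identifier renaming), in the spirit of the abstract's semantically equivalent perturbations for robustness assessment. The paper's actual perturbation operators are not specified in this listing; the snippet and names below are hypothetical.

```python
import re

# Naive, illustrative semantics-preserving perturbation: identifier renaming.
# (A real implementation would rename via the AST to avoid touching strings;
# this regex version only conveys the robustness-testing idea.)
def rename_identifiers(code: str, mapping: dict[str, str]) -> str:
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

vulnerable = """
void copy_msg(char *dst, const char *src, int len) {
    memcpy(dst, src, len);  /* len is never bounds-checked */
}
"""

perturbed = rename_identifiers(
    vulnerable, {"dst": "out_buf", "src": "payload", "len": "n_bytes"}
)
# A robust model should produce the same causal reasoning DAG for both variants;
# divergence between the two suggests reliance on surface patterns.
print(perturbed)
```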