🤖 AI Summary
Existing reasoning-oriented large language models (LLMs) for code vulnerability detection suffer from excessive parameter counts, closed-source architectures, and poor generalization; they predominantly rely on superficial pattern matching rather than deep reasoning about program states. Method: We propose the first reasoning-focused LLM (7B parameters) specialized for vulnerability discovery, replacing pattern matching with explicit program state inference. We design an agent framework enabling project-level automated detection, incorporating customized data filtering, reasoning trajectory generation and correction, and test-time optimization. Contribution/Results: Our model achieves state-of-the-art performance across Python, C/C++, and Java benchmarks, outperforming mainstream static analyzers (e.g., CodeQL) and commercial LLMs. It identifies multiple zero-day vulnerabilities in real-world projects and surpasses dynamic analysis tools such as AFL++ in both precision and robustness.
📝 Abstract
We propose VulnLLM-R, the *first specialized reasoning LLM* for vulnerability detection. Our key insight is that LLMs can reason about program states to analyze potential vulnerabilities, rather than relying on simple pattern matching; this improves the model's generalizability and prevents it from learning shortcuts. However, SOTA reasoning LLMs are typically ultra-large, closed-source, or perform poorly on vulnerability detection. To address this, we propose a novel training recipe with specialized data selection, reasoning data generation, reasoning data filtering and correction, and test-time optimization. Using our proposed methodology, we train a reasoning model with seven billion parameters. Through extensive experiments on SOTA datasets across Python, C/C++, and Java, we show that VulnLLM-R is more effective and efficient than SOTA static analysis tools and both open-source and commercial large reasoning models. We further conduct a detailed ablation study to validate the key designs in our training recipe. Finally, we construct an agent scaffold around our model and show that it outperforms CodeQL and AFL++ in real-world projects. Our agent further discovers a set of zero-day vulnerabilities in actively maintained repositories. This work represents a pioneering effort to enable real-world, project-level vulnerability detection using AI agents powered by specialized reasoning models. The code is available at https://github.com/ucsb-mlsec/VulnLLM-R.
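To make the pattern-matching vs. program-state-reasoning distinction concrete, here is a small hypothetical Python example (our own illustration, not taken from the paper or its benchmarks). A surface-level pattern matcher may see a bounds check and call the parser safe; reasoning about program state reveals that the check compares the declared length against the whole buffer rather than the bytes remaining after the read offset, so a crafted length byte silently truncates the returned record:

```python
def parse_record(buf: bytes, offset: int) -> bytes:
    """Parse a length-prefixed record starting at `offset`.

    Hypothetical vulnerable parser: the bounds check below looks
    plausible but validates the wrong quantity.
    """
    length = buf[offset]        # attacker-controlled length byte
    start = offset + 1
    # Flawed check: compares `length` against the full buffer size,
    # not the bytes remaining after `start`. A pattern matcher sees
    # "a bounds check exists"; state reasoning sees it is inadequate.
    if length > len(buf):
        raise ValueError("bad length")
    return buf[start:start + length]


# Declared length is 5, but only 2 bytes follow the length field at
# offset 2; the check passes and the record is silently truncated.
record = parse_record(b"ab\x05cd", 2)
```

Detecting this flaw requires tracking the relationship between `offset`, `length`, and the remaining buffer size across the function, which is exactly the kind of program-state inference the abstract contrasts with shortcut pattern learning.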