R2Vul: Learning to Reason about Software Vulnerabilities with Reinforcement Learning and Structured Reasoning Distillation

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the unreliable reasoning of large language models (LLMs) in software vulnerability detection (SVD) and the lack of verifiability and actionability in their security assessments, this paper proposes a novel framework that combines reinforcement learning from AI feedback (RLAIF) with structured reasoning distillation. It is the first to explicitly model the critical distinction between genuine vulnerabilities and plausible-but-false positives, and it introduces the first large-scale, multilingual structured preference dataset for SVD. Using this framework, a lightweight 1.5B-parameter model, after fine-tuning, consistently outperforms static application security testing (SAST) tools, chain-of-thought (CoT) prompting, and classification baselines across five programming languages. Moreover, it generalizes significantly better to out-of-distribution vulnerability patterns. These results validate the feasibility of deploying compact models that generate trustworthy, verifiable, and actionable security reasoning, bridging the gap between LLM capabilities and practical SVD requirements.
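The summary above centers on preference-based training: the model is rewarded for ranking a valid assessment above a misleading-but-plausible one. A minimal sketch of one common instantiation of such an objective, a DPO-style pairwise loss over log-probabilities of the two reasoning traces (the paper's exact RLAIF objective and hyperparameters may differ; `beta` and the toy log-probabilities below are illustrative):

```python
import math

def dpo_preference_loss(policy_logp_valid, policy_logp_misleading,
                        ref_logp_valid, ref_logp_misleading, beta=0.1):
    """Pairwise preference loss: pushes the policy to rank a valid
    structured assessment above a plausible-but-misleading one,
    measured relative to a frozen reference model."""
    margin = ((policy_logp_valid - ref_logp_valid)
              - (policy_logp_misleading - ref_logp_misleading))
    # -log(sigmoid(beta * margin)); zero margin gives log(2)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Toy sequence log-probabilities: the policy already prefers the valid
# assessment more strongly than the reference does, so the loss drops
# below the log(2) baseline.
loss = dpo_preference_loss(policy_logp_valid=-12.0, policy_logp_misleading=-20.0,
                           ref_logp_valid=-14.0, ref_logp_misleading=-18.0)
```

Minimizing this loss increases the gap between the valid and misleading assessments under the policy, which is exactly the "distinguish well-founded from misleading" behavior the paper targets.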

📝 Abstract
Large language models (LLMs) have shown promising performance in software vulnerability detection (SVD), yet their reasoning capabilities remain unreliable. Existing approaches relying on chain-of-thought (CoT) struggle to provide relevant and actionable security assessments. Additionally, effective SVD requires not only generating coherent reasoning but also differentiating between well-founded and misleading yet plausible security assessments, an aspect overlooked in prior work. To this end, we introduce R2Vul, a novel approach that distills structured reasoning into small LLMs using reinforcement learning from AI feedback (RLAIF). Through RLAIF, R2Vul enables LLMs to produce structured, security-aware reasoning that is actionable and reliable while explicitly learning to distinguish valid assessments from misleading ones. We evaluate R2Vul across five languages against SAST tools, CoT, instruction tuning, and classification-based baselines. Our results show that R2Vul with structured reasoning distillation enables a 1.5B student LLM to rival larger models while improving generalization to out-of-distribution vulnerabilities. Beyond model improvements, we contribute a large-scale, multilingual preference dataset featuring structured reasoning to support future research in SVD.
Problem

Research questions and friction points this paper is trying to address.

Improving unreliable reasoning in LLMs for vulnerability detection
Distinguishing valid from misleading security assessments effectively
Enhancing small LLMs' performance to rival larger models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning from AI feedback
Distills structured reasoning into small LLMs
Explicitly distinguishes valid from misleading assessments
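The contributed preference dataset pairs each code sample with a chosen (valid) and a rejected (plausible-but-misleading) structured assessment. A minimal sketch of what one such record might look like, with hypothetical field names and a hypothetical `is_valid_record` check (the paper's actual schema is not specified here):

```python
# Hypothetical preference-dataset record for SVD; field names are illustrative.
record = {
    "language": "java",
    "code": 'String q = "SELECT * FROM users WHERE id = " + userId;',
    "label": "vulnerable",  # ground truth: SQL injection (CWE-89)
    "chosen": (
        "The query concatenates unsanitized user input directly into SQL, "
        "enabling injection; use a parameterized PreparedStatement instead."
    ),
    "rejected": (
        "The string is merely constructed here and never executed, "
        "so no injection is possible."  # plausible but misleading
    ),
}

def is_valid_record(r):
    """Check that a record has the minimal non-empty fields an
    RLAIF-style preference pair needs."""
    return all(r.get(k) for k in ("code", "label", "chosen", "rejected"))
```

Pairing each sample with both a sound and a misleading rationale is what lets the reward signal penalize superficially convincing but wrong assessments, rather than only rewarding fluent reasoning.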