MulVul: Retrieval-augmented Multi-Agent Code Vulnerability Detection via Cross-Model Prompt Evolution

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current large language models in real-world vulnerability detection, which stem from the heterogeneity of vulnerability patterns and the poor scalability of handcrafted prompts. The authors propose a retrieval-augmented multi-agent framework that integrates a Router for coarse-grained classification and a Detector for fine-grained identification, working in concert to adapt to diverse vulnerability patterns through knowledge base retrieval. To mitigate self-correction bias inherent in single-model approaches, they introduce a cross-model prompt evolution mechanism that decouples prompt generation from validation. Evaluated across 130 Common Weakness Enumeration (CWE) types, the method achieves a Macro-F1 score of 34.79%, representing a 41.5% improvement over the strongest baseline. Furthermore, the prompt evolution mechanism outperforms manually crafted prompts by 51.6% in detection performance.
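The coarse-to-fine flow described above can be sketched as a small pipeline. This is a minimal illustration with toy stand-ins for the paper's LLM agents and knowledge base: the Router here is a keyword scorer, the Detector a pattern lookup, and all names and data are hypothetical, not from the paper.

```python
# Hypothetical sketch of a coarse-to-fine Router -> Detector pipeline.
# In MulVul both stages are LLM agents with retrieval tools; here toy
# heuristics stand in for the model calls and the knowledge base.

# Toy knowledge base: coarse category -> [(exact CWE id, code pattern)]
KB = {
    "memory": [("CWE-787", "strcpy")],
    "injection": [("CWE-89", "SELECT")],
}

# Toy routing cues: coarse category -> keywords the Router scores against
CATEGORIES = {
    "memory": ["strcpy", "malloc"],
    "injection": ["SELECT", "query"],
}

def router(code, categories, k=2):
    """Coarse stage: score each category and keep the top-k candidates."""
    scores = {c: sum(kw in code for kw in kws) for c, kws in categories.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def detector(code, category, kb):
    """Fine stage: a specialized check for one coarse category, grounded
    in retrieved evidence, returning the exact weakness type if found."""
    for cwe_id, pattern in kb.get(category, []):
        if pattern in code:
            return cwe_id
    return None

def mulvul_pipeline(code, categories, kb, k=2):
    """Route to top-k coarse categories, then run each Detector."""
    hits = []
    for cat in router(code, categories, k):
        cwe = detector(code, cat, kb)
        if cwe is not None:
            hits.append(cwe)
    return hits
```

For example, `mulvul_pipeline("strcpy(buf, user_input);", CATEGORIES, KB)` routes the snippet to the "memory" category and the Detector names CWE-787. The point of the two-stage split is that each Detector only needs prompts and evidence for one slice of the 130 CWE types.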

📝 Abstract
Large Language Models (LLMs) struggle to automate real-world vulnerability detection due to two key limitations: the heterogeneity of vulnerability patterns undermines the effectiveness of a single unified model, and manual prompt engineering for massive weakness categories is unscalable. To address these challenges, we propose MulVul, a retrieval-augmented multi-agent framework designed for precise and broad-coverage vulnerability detection. MulVul adopts a coarse-to-fine strategy: a Router agent first predicts the top-k coarse categories and then forwards the input to specialized Detector agents, which identify the exact vulnerability types. Both agents are equipped with retrieval tools to actively source evidence from vulnerability knowledge bases to mitigate hallucinations. Crucially, to automate the generation of specialized prompts, we design Cross-Model Prompt Evolution, a prompt optimization mechanism in which a generator LLM iteratively refines candidate prompts while a distinct executor LLM validates their effectiveness. This decoupling mitigates the self-correction bias inherent in single-model optimization. Evaluated on 130 CWE types, MulVul achieves 34.79% Macro-F1, outperforming the best baseline by 41.5%. Ablation studies validate cross-model prompt evolution, which boosts performance by 51.6% over manual prompts by effectively handling diverse vulnerability patterns.
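The cross-model prompt evolution loop can be sketched as a simple propose-and-validate search. This is a minimal illustration under stated assumptions: the toy generator and executor below stand in for two distinct LLMs, and the scoring cues are invented for the example; the paper's actual mechanism validates prompts against labeled detection data.

```python
# Hypothetical sketch of cross-model prompt evolution: a *generator*
# proposes prompt variants while a *distinct executor* scores them, so
# the proposing model never grades its own output (self-correction bias).

def evolve_prompt(seed_prompt, propose, score, rounds=3):
    """Greedy search: keep whichever candidate the executor scores highest."""
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        for candidate in propose(best):   # generator LLM proposes edits
            s = score(candidate)          # executor LLM validates them
            if s > best_score:
                best, best_score = candidate, s
    return best, best_score

def toy_generator(prompt):
    """Stand-in for the generator LLM: append two candidate refinements."""
    return [prompt + " Focus on bounds checks.",
            prompt + " Cite the matching CWE ID."]

def toy_executor(prompt):
    """Stand-in for the executor LLM: count useful cues in the prompt.
    (A real executor would run the prompt on a validation set.)"""
    cues = ["bounds", "CWE"]
    return sum(c in prompt for c in cues)
```

Running `evolve_prompt("Identify the vulnerability.", toy_generator, toy_executor)` accumulates both cues over the rounds. The key design choice is that `propose` and `score` are different models; a single model doing both tends to rate its own phrasings favorably.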
Problem

Research questions and friction points this paper is trying to address.

code vulnerability detection
large language models
prompt engineering
vulnerability heterogeneity
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

retrieval-augmented
multi-agent
prompt evolution
vulnerability detection
large language models