🤖 AI Summary
Cybersecurity faces dual challenges: an explosion in vulnerability disclosures (over 25,000 new CVEs in 2024) and the limited timeliness and hallucination-prone nature of large language models (LLMs). To address these, we propose a retrieval-augmented, provenance-verified framework for real-time vulnerability analysis. Our method dynamically scrapes and structurally parses authoritative sources (e.g., NVD, CWE) to enable low-latency knowledge updates. We introduce a novel self-critical provenance verification mechanism that jointly tracks evidence chains and performs response self-assessment, ensuring full auditability and traceable reasoning. Evaluated on exploitation and mitigation strategy generation, our approach achieves over 99% and 97% accuracy, respectively—outperforming baseline LLMs—while substantially reducing hallucinations and omissions. This work overcomes critical bottlenecks in LLM-based vulnerability analysis concerning timeliness, reliability, and explainability.
📝 Abstract
In cybersecurity, security analysts face the challenge of mitigating newly discovered vulnerabilities in real-time, with over 300,000 Common Vulnerabilities and Exposures (CVEs) identified since 1999. The sheer volume of known vulnerabilities complicates the detection of patterns for unknown threats. While LLMs can assist, they often hallucinate and lack alignment with recent threats. Over 25,000 vulnerabilities have been identified so far in 2024, all introduced after the training data cutoff of popular LLMs (e.g., GPT-4). This poses a major challenge to leveraging LLMs in cybersecurity, where accuracy and up-to-date information are paramount. In this work, we aim to improve the adaptation of LLMs to vulnerability analysis by mimicking how analysts perform such tasks. We propose ProveRAG, an LLM-powered system designed to assist in rapidly analyzing CVEs with automated retrieval augmentation of web data while self-evaluating its responses with verifiable evidence. ProveRAG incorporates a self-critique mechanism to help alleviate the omissions and hallucinations common in the output of LLMs applied to cybersecurity. The system cross-references data from verifiable sources (NVD and CWE), giving analysts confidence in the actionable insights provided. Our results indicate that ProveRAG excels in delivering verifiable evidence to the user, with over 99% and 97% accuracy in generating exploitation and mitigation strategies, respectively. This system outperforms direct prompting and chunking retrieval in vulnerability analysis by overcoming temporal and context-window limitations. ProveRAG guides analysts to secure their systems more effectively while documenting the process for future audits.
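The retrieve-generate-verify loop the abstract describes can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the function names, the in-memory stand-in for live NVD/CWE retrieval, and the refusal rule in the critique step are all hypothetical.

```python
# Minimal sketch of a retrieval-augmented, self-critiquing analysis loop
# in the style ProveRAG describes. Everything here is illustrative: the
# "knowledge base" stands in for live scraping of NVD/CWE, and `generate`
# stands in for an LLM call.
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str   # provenance, e.g. an NVD or CWE URL
    excerpt: str  # retrieved passage backing a claim


@dataclass
class Analysis:
    cve_id: str
    answer: str
    evidence: list  # evidence chain kept for auditability


# Toy knowledge base standing in for dynamically scraped authoritative sources.
KB = {
    "CVE-2024-0001": Evidence(
        source="https://nvd.nist.gov/vuln/detail/CVE-2024-0001",
        excerpt="Buffer overflow ... allows remote code execution.",
    ),
}


def retrieve(cve_id: str) -> list:
    """Fetch authoritative evidence for a CVE (here: a dict lookup)."""
    ev = KB.get(cve_id)
    return [ev] if ev else []


def generate(cve_id: str, evidence: list) -> str:
    """Stand-in for the LLM: answer only from retrieved evidence."""
    if not evidence:
        return "insufficient evidence"
    return f"{cve_id}: {evidence[0].excerpt}"


def self_critique(answer: str, evidence: list) -> bool:
    """Self-critique pass: accept only answers backed by a verifiable source."""
    return bool(evidence) and answer != "insufficient evidence"


def analyze(cve_id: str) -> Analysis:
    evidence = retrieve(cve_id)
    answer = generate(cve_id, evidence)
    if not self_critique(answer, evidence):
        answer = "insufficient evidence"  # refuse rather than hallucinate
    return Analysis(cve_id, answer, evidence)
```

The key design point mirrored here is that an answer without an evidence chain is rejected rather than emitted, which is how the system trades a small loss of coverage for auditability and fewer hallucinations.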