🤖 AI Summary
To balance automation accuracy against expert efficiency in software vulnerability detection, this paper proposes an expert-collaborative, LLM-driven vulnerability identification framework. Methodologically, it combines cross-domain and in-domain few-shot prompting to strengthen the LLM's few-shot generalization for CWE classification of Python code, and it introduces a novel confidence-driven expert routing mechanism that dynamically delegates low-confidence predictions to human experts, forming a closed-loop feedback system. Experiments show substantial gains over zero-shot baselines in classification accuracy and a significant reduction in manual analysis effort within simulated detection workflows, enabling high-accuracy, low-intervention collaborative vulnerability identification. The core contributions are: (1) a scalable few-shot generalization paradigm for code vulnerability classification, and (2) the first confidence-adaptive expert intervention strategy designed specifically for vulnerability detection.
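The confidence-driven routing idea can be sketched in a few lines: accept the model's label when its confidence clears a threshold, and escalate the rest to a human analyst. This is a minimal illustration, not the paper's implementation; the `Prediction` type, the threshold value, and the `expert_review` callback are all assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Prediction:
    cwe: str           # predicted CWE identifier, e.g. "CWE-79" (illustrative)
    confidence: float  # model's confidence score in [0, 1] (assumed available)


def route(pred: Prediction, threshold: float,
          expert_review: Callable[[Prediction], str]) -> str:
    """Accept the model's label when confidence clears the threshold;
    otherwise delegate the case to a human expert."""
    if pred.confidence >= threshold:
        return pred.cwe
    return expert_review(pred)


# Usage: a high-confidence case is auto-accepted; a low-confidence one
# is escalated to the (here stubbed) expert, closing the feedback loop.
expert = lambda p: "CWE-89"  # stand-in for a human analyst's verdict
print(route(Prediction("CWE-79", 0.92), threshold=0.7, expert_review=expert))
print(route(Prediction("CWE-20", 0.41), threshold=0.7, expert_review=expert))
```

Only the cases below the threshold reach the expert, which is how the framework trades a small amount of human effort for accuracy on exactly the inputs where the model is least reliable.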
📝 Abstract
As cyber threats grow more sophisticated, rapid and accurate vulnerability detection is essential for maintaining secure systems. This study explores the use of Large Language Models (LLMs) in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs), comparing zero-shot, few-shot cross-domain, and few-shot in-domain prompting strategies. Our results indicate that zero-shot prompting performs poorly, whereas few-shot prompting significantly improves classification performance. Performance improves further when few-shot prompting is paired with confidence-based routing, which directs human experts to cases of high model uncertainty and thereby balances automation against expert oversight. We find that LLMs can generalize across vulnerability categories from minimal examples, suggesting their potential as scalable, adaptable cybersecurity tools in simulated environments. However, model reliability, interpretability, and adversarial robustness remain critical areas for future research. By integrating AI-driven approaches with expert-in-the-loop (EITL) decision-making, this work highlights a pathway toward more efficient and responsive cybersecurity workflows. Our findings provide a foundation for deploying AI-assisted vulnerability detection systems, in both real and simulated environments, that enhance operational resilience while reducing the burden on human analysts.
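The distinction between the prompting strategies comes down to how the prompt is assembled: zero-shot gives only the query snippet, while few-shot prepends labeled (code, CWE) examples, drawn either from the same CWE categories as the query (in-domain) or from different ones (cross-domain). A minimal sketch of such a prompt builder, with hypothetical example snippets and wording not taken from the paper:

```python
def build_prompt(examples: list[tuple[str, str]], snippet: str) -> str:
    """Assemble a few-shot CWE-classification prompt: labeled
    (code, CWE) demonstrations followed by the unlabeled query.
    An empty `examples` list yields the zero-shot variant."""
    parts = ["Classify the vulnerability in each Python snippet as a CWE ID.\n"]
    for code, cwe in examples:
        parts.append(f"Code:\n{code}\nLabel: {cwe}\n")
    parts.append(f"Code:\n{snippet}\nLabel:")
    return "\n".join(parts)


# Illustrative in-domain shots; cross-domain prompting would instead
# draw these demonstrations from CWE categories outside the query's.
shots = [
    ('query = "SELECT * FROM users WHERE id=" + uid', "CWE-89"),
    ('html = "<p>" + user_input + "</p>"', "CWE-79"),
]
prompt = build_prompt(shots, 'os.system("ping " + host)')
```

The resulting string ends with an open `Label:` slot, so the LLM's completion is read directly as the predicted CWE, the same output the routing stage then scores for confidence.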