🤖 AI Summary
Deep learning–based vulnerability detectors suffer from poor generalizability and susceptibility to spurious correlations. Method: We propose the first perturbation-driven evaluation framework, introducing a unified code feature representation and two controllable perturbation types, Feature-Preserving Perturbations (FPPs) and Feature-Eliminating Perturbations (FEPs), to systematically isolate and quantify the influence of genuine vulnerability features versus spurious ones. Built on syntactic and semantic analysis of code, the framework enables fine-grained attribution of model predictions to either class of feature. Results: Experiments on five state-of-the-art detectors reveal striking fragility: only ~2% of FPPs flip a prediction, yet ~84% of FEPs fail to change the vulnerability prediction even though the vulnerability feature has been removed; for spurious features, FPPs cause recall drops of up to 29% in graph-based detectors, indicating overreliance on spurious cues. This work establishes a novel paradigm for robustness assessment and trustworthiness validation of vulnerability detection models.
📝 Abstract
Recent research has revealed that the reported results of an emerging body of DL-based techniques for detecting software vulnerabilities are not reproducible, either across different datasets or on unseen samples. This paper aims to provide the foundation for properly evaluating research in this domain. We do so by analyzing prior work and existing vulnerability datasets for the syntactic and semantic features of code that contribute to vulnerability, as well as features that falsely correlate with vulnerability. We provide a novel, uniform representation to capture both sets of features, and use this representation to detect the presence of both vulnerability and spurious features in code. To this end, we design two types of code perturbations: feature-preserving perturbations (FPPs) ensure that the vulnerability feature remains in a given code sample, while feature-eliminating perturbations (FEPs) remove the feature from the code sample. These perturbations measure the influence of vulnerability and spurious features on the predictions of a given vulnerability detection solution. To evaluate how the two classes of perturbations influence predictions, we conducted a large-scale empirical study on five state-of-the-art DL-based vulnerability detectors. Our study shows that, for vulnerability features, on average across the five detectors only ~2% of FPPs yield the undesirable effect of changing a prediction. However, on average, ~84% of FEPs yield the undesirable effect of retaining the vulnerability prediction. For spurious features, we observed that FPPs yielded a drop in recall of up to 29% for graph-based detectors. We present the reasons underlying these results and suggest strategies for improving DL-based vulnerability detectors. We provide our perturbation-based evaluation framework as a public resource to enable independent future evaluation of vulnerability detectors.
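To make the two perturbation types concrete, here is a minimal, hypothetical sketch (not the paper's actual framework): a toy C sample whose "vulnerability feature" is an unbounded `strcpy`, an FPP that renames an identifier while keeping that feature intact, and an FEP that replaces the unbounded copy with a bounded one, removing the feature. The regex-based transforms and the feature oracle are illustrative stand-ins for the syntax/semantics-aware machinery the paper describes.

```python
import re

# Toy C sample containing a classic unbounded-copy vulnerability feature.
VULN_SAMPLE = """
void copy(char *src) {
    char buf[16];
    strcpy(buf, src);   /* unbounded copy: the vulnerability feature */
}
"""

def fpp_rename(code: str, old: str, new: str) -> str:
    """Feature-Preserving Perturbation (toy): rename an identifier.
    Program semantics and the vulnerability feature (unchecked strcpy)
    are both kept."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def fep_bound_copy(code: str) -> str:
    """Feature-Eliminating Perturbation (toy): replace the unbounded
    strcpy with a bounded strncpy, removing the vulnerability feature."""
    return re.sub(
        r"strcpy\(\s*(\w+)\s*,\s*(\w+)\s*\)",
        r"strncpy(\1, \2, sizeof(\1) - 1)",
        code,
    )

def has_vuln_feature(code: str) -> bool:
    """Stand-in feature oracle: the feature here is any plain strcpy call."""
    return re.search(r"\bstrcpy\(", code) is not None

fpp_variant = fpp_rename(VULN_SAMPLE, "buf", "v0")
fep_variant = fep_bound_copy(VULN_SAMPLE)

assert has_vuln_feature(fpp_variant)      # FPP keeps the feature
assert not has_vuln_feature(fep_variant)  # FEP removes it
```

A detector that changes its prediction on the FPP variant, or keeps its vulnerability prediction on the FEP variant, is reacting to something other than the vulnerability feature itself, which is exactly what the study's ~2% and ~84% figures quantify.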