Similar but Patched Code Considered Harmful -- The Impact of Similar but Patched Code on Recurring Vulnerability Detection and How to Remove Them

📅 2024-12-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High false-positive rates in vulnerability detection arise from "similar but patched" (SBP) code, i.e., variants that are textually similar to vulnerable code yet already fixed, which severely impairs the reliability of clone-based and deep learning detectors. Method: We propose Fixed Vulnerability Filter (FVF), a language-agnostic framework that systematically characterizes SBP interference in deep learning-based detection and introduces a fine-grained, code-evolution-driven SBP identification paradigm that moves beyond coarse function-signature matching. FVF integrates change-history mining, AST-difference analysis, cross-version function-level patch localization, and multi-tool ensemble filtering. Contribution/Results: We construct the first benchmark dataset of 6,827 real-world SBP functions. Evaluated on four mainstream clone-based detection tools, FVF eliminates 65.1% of false positives without filtering out any true vulnerabilities. Empirical evaluation further reveals that four state-of-the-art deep learning models consistently fail on SBP instances. This work establishes a new benchmark and technical foundation for robustness-aware vulnerability detection and evaluation.

📝 Abstract
Identifying recurring vulnerabilities is crucial for ensuring software security. Clone-based techniques, while widely used, often generate many false alarms due to the existence of similar but patched (SBP) code, which is similar to vulnerable code but is not vulnerable due to having been patched. Although the SBP code poses a great challenge to the effectiveness of existing approaches, it has not yet been well explored. In this paper, we propose a programming language agnostic framework, Fixed Vulnerability Filter (FVF), to identify and filter such SBP instances in vulnerability detection. Different from existing studies that leverage function signatures, our approach analyzes code change histories to precisely pinpoint SBPs and consequently reduce false alarms. Evaluation under practical scenarios confirms the effectiveness and precision of our approach. Remarkably, FVF identifies and filters 65.1% of false alarms from four vulnerability detection tools (i.e., ReDeBug, VUDDY, MVP, and an elementary hash-based approach) without yielding false positives. We further apply FVF to 1,081 real-world software projects and construct a real-world SBP dataset containing 6,827 SBP functions. Due to the SBP nature, the dataset can act as a strict benchmark to test the sensitivity of the vulnerability detection approach in distinguishing real vulnerabilities and SBPs. Using this dataset, we demonstrate the ineffectiveness of four state-of-the-art deep learning-based vulnerability detection approaches. Our dataset can help developers make a more realistic evaluation of vulnerability detection approaches and also paves the way for further exploration of real-world SBP scenarios.
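The core idea behind filtering SBP alarms can be illustrated with a toy sketch of the "elementary hash-based approach" the abstract mentions: an alarm is suppressed when the flagged function matches a known fixed version of the code (e.g., mined from fix commits), rather than the vulnerable version. This is a minimal illustration under assumed inputs; the function names, snippets, and normalization here are hypothetical and are not the paper's actual implementation.

```python
import hashlib

def normalize(code: str) -> str:
    """Strip // comments and collapse whitespace so cosmetic
    differences do not hide that a patch is already applied.
    (Hypothetical, simplified normalization.)"""
    lines = []
    for line in code.splitlines():
        line = line.split("//")[0]
        line = " ".join(line.split())  # collapse internal whitespace
        if line:
            lines.append(line)
    return "\n".join(lines)

def is_similar_but_patched(flagged_fn: str, patched_versions: list[str]) -> bool:
    """A flagged function is treated as SBP if it matches one of the
    known *fixed* versions, i.e., the fix is already present."""
    h = hashlib.sha256(normalize(flagged_fn).encode()).hexdigest()
    return any(
        hashlib.sha256(normalize(p).encode()).hexdigest() == h
        for p in patched_versions
    )

# Hypothetical example: a fix commit added a bounds check.
vulnerable = "int get(int *a, int i) { return a[i]; }"
patched    = "int get(int *a, int i) { if (i < 0) return -1; return a[i]; }"

# A clone detector matching on similarity would still flag this patched
# variant as a recurring vulnerability; the filter suppresses the alarm.
alarm_code = "int get(int *a, int i) {  if (i < 0) return -1;  return a[i]; }"
print(is_similar_but_patched(alarm_code, [patched]))  # True: alarm filtered
```

The paper's FVF goes well beyond this sketch by analyzing code change histories rather than relying on exact matching, which is what lets it pinpoint SBPs precisely across versions.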
Problem

Research questions and friction points this paper is trying to address.

Software Security
False Positives
Deep Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

FVF
Code Modification History
False Alert Reduction
Zixuan Tan
Zhejiang University, Hangzhou, Zhejiang, China
Jiayuan Zhou
Principal Researcher, Waterloo Research Centre, Huawei Canada
OSS Vulnerabilities, Crowdsourced Software Engineering, Mining Software Repositories, Empirical
Xing Hu
Zhejiang University, Hangzhou, Zhejiang, China
Shengyi Pan
Amazon
Kui Liu
Huawei, Hangzhou, Zhejiang, China
Xin Xia
Huawei, Hangzhou, Zhejiang, China