Mono: Is Your "Clean" Vulnerability Dataset Really Solvable? Exposing and Trapping Undecidable Patches and Beyond

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vulnerability datasets suffer from erroneous security patch labels, missing contextual information, and a substantial proportion of indeterminate patches (16.7% of CVE samples), resulting in high noise levels and degraded model performance. To address these issues, we propose Mono, an LLM-driven, three-stage expert-style reasoning framework that integrates semantic-aware patch classification, iterative context modeling, and systematic root-cause diagnosis. Mono introduces the first method for identifying and quantifying indeterminate patches and designs a synergistic mechanism combining context enhancement with root-cause filtering. Experimental results demonstrate that Mono corrects 31.0% of label errors, recovers 89% of cross-procedural vulnerabilities, and improves vulnerability detection accuracy by 15%. We open-source both the Mono framework and the high-quality dataset MonoLens.

📝 Abstract
The quantity and quality of vulnerability datasets are essential for developing deep learning solutions to vulnerability-related tasks. Due to the limited availability of vulnerabilities, a common approach to building such datasets is analyzing security patches in source code. However, existing security patches often suffer from inaccurate labels, insufficient contextual information, and undecidable patches that fail to clearly represent the root causes of vulnerabilities or their fixes. These issues introduce noise into the dataset, which can mislead detection models and undermine their effectiveness. To address these issues, we present mono, a novel LLM-powered framework that simulates human experts' reasoning process to construct reliable vulnerability datasets. mono introduces three key components to improve security patch datasets: (i) semantic-aware patch classification for precise vulnerability labeling, (ii) iterative contextual analysis for comprehensive code understanding, and (iii) systematic root cause analysis to identify and filter undecidable patches. Our comprehensive evaluation on the MegaVul benchmark demonstrates that mono can correct 31.0% of labeling errors, recover 89% of inter-procedural vulnerabilities, and reveal that 16.7% of CVEs contain undecidable patches. Furthermore, mono's enriched context representation improves existing models' vulnerability detection accuracy by 15%. We open-source the mono framework and the MonoLens dataset at https://github.com/vul337/mono.
Problem

Research questions and friction points this paper is trying to address.

Identifying and filtering undecidable patches in vulnerability datasets
Correcting inaccurate labels in security patch datasets
Enhancing contextual understanding for vulnerability detection models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered framework for reliable datasets
Semantic-aware patch classification for precise labeling
Iterative contextual analysis for code understanding
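The three components above form a sequential pipeline: classify each patch, iteratively enrich its context, then diagnose its root cause and filter out undecidable patches. A minimal sketch of that control flow is below; every name is hypothetical, and the real mono framework drives each stage with LLM-based reasoning rather than the placeholder logic shown here.

```python
# Hypothetical sketch of a mono-style three-stage pipeline.
# Stage internals are placeholders; the actual framework is LLM-driven.
from dataclasses import dataclass, field

@dataclass
class Patch:
    cve_id: str
    diff: str
    context: list = field(default_factory=list)
    label: str = "unknown"      # e.g. "security-fix" vs "non-security"
    undecidable: bool = False

def classify_patch(patch: Patch) -> Patch:
    """Stage 1: semantic-aware classification assigns a precise label."""
    patch.label = "security-fix" if "fix" in patch.diff.lower() else "non-security"
    return patch

def enrich_context(patch: Patch, max_rounds: int = 3) -> Patch:
    """Stage 2: iteratively pull in surrounding code (callers, callees)
    until the model has enough context to reason about the change."""
    for round_no in range(max_rounds):
        patch.context.append(f"context-slice-{round_no}")
    return patch

def diagnose_root_cause(patch: Patch) -> Patch:
    """Stage 3: root-cause analysis; patches whose cause cannot be
    established from the gathered context are flagged undecidable."""
    patch.undecidable = not patch.context
    return patch

def mono_pipeline(patches):
    """Run all three stages and keep only decidable, labeled patches."""
    kept = []
    for p in patches:
        p = diagnose_root_cause(enrich_context(classify_patch(p)))
        if not p.undecidable:
            kept.append(p)
    return kept
```

The key design point the paper emphasizes is the synergy between stage 2 and stage 3: context enhancement feeds root-cause filtering, so undecidable patches are trapped instead of silently polluting the dataset.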
Zeyu Gao
Tsinghua University
Junlin Zhou
Associate Professor of Computer Science, University of Electronic Science and Technology of China
Recommender Systems · Data Mining · Big Data Analysis
Bolun Zhang
Institute of Information Engineering, Chinese Academy of Sciences
Yi He
Chao Zhang
Tsinghua University
Yuxin Cui
Tsinghua University
Hao Wang
Tsinghua University