A Practical Solution to Systematically Monitor Inconsistencies in SBOM-based Vulnerability Scanners

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
SBOM-driven vulnerability scanning (SVS) tools suffer from inconsistent results and silent failures—causing false positives or negatives—that undermine security assurance. To address this, we propose SVS-TEST, the first systematic evaluation framework for SVS tools, comprising a rigorous methodology, an open-source toolchain, and a benchmark suite featuring 16 carefully constructed SBOM samples paired with ground-truth vulnerability annotations. SVS-TEST enables reproducible, quantitative assessment of SVS tools’ capabilities, maturity, and error-handling behaviors via automated orchestration, cross-tool discrepancy analysis, and root-cause attribution of failures. Empirical evaluation across seven widely adopted SVS tools reveals substantial reliability disparities: several tools fail silently on syntactically and semantically valid SBOMs. All artifacts—including the benchmark, toolchain, and evaluation reports—have been open-sourced, and findings were responsibly disclosed to affected tool maintainers prior to publication.

📝 Abstract
A Software Bill of Materials (SBOM) provides new opportunities for automated vulnerability identification in software products. While the industry is adopting SBOM-based Vulnerability Scanning (SVS) to identify vulnerabilities, we increasingly observe inconsistencies and unexpected behavior that result in false negatives and silent failures. In this work, we present the background necessary to understand the underlying complexity of SVS and introduce SVS-TEST, a method and tool to analyze the capability, maturity, and failure conditions of SVS-tools in real-world scenarios. We showcase the utility of SVS-TEST in a case study evaluating seven real-world SVS-tools using 16 precisely crafted SBOMs and their respective ground truth. Our results unveil significant differences in the reliability and error handling of SVS-tools; multiple SVS-tools silently fail on valid input SBOMs, creating a false sense of security. We conclude our work by highlighting implications for researchers and practitioners, including how organizations and developers of SVS-tools can utilize SVS-TEST to monitor SVS capability and maturity. All results and research artifacts are made publicly available, and all findings were disclosed to the SVS-tool developers ahead of time.
Problem

Research questions and friction points this paper is trying to address.

Detects inconsistencies in the reliability and error handling of SBOM-based vulnerability scanners
Addresses false negatives and silent failures in automated vulnerability identification tools
Evaluates the capability and maturity of vulnerability scanners in real-world SBOM scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces SVS-TEST method to analyze vulnerability scanner capabilities
Uses crafted SBOMs to evaluate real-world SVS-tools for reliability
Provides public tool for monitoring scanner maturity and failure conditions
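The cross-tool discrepancy analysis described above can be sketched in a few lines: compare each tool's reported findings against the per-SBOM ground truth and flag false negatives, false positives, and silent failures. This is a minimal illustrative sketch, not the paper's actual implementation; the function names (`evaluate_tool`, `cross_tool_report`) and the example tool names and CVE sets are assumptions for demonstration only.

```python
# Hypothetical sketch of SVS-TEST-style discrepancy analysis: compare each
# tool's reported CVEs against the ground truth for one benchmark SBOM.

def evaluate_tool(ground_truth: set, reported: set) -> dict:
    """Classify one tool's findings for a single SBOM."""
    return {
        "false_negatives": ground_truth - reported,
        "false_positives": reported - ground_truth,
        # Reporting nothing on an SBOM known to be vulnerable is the
        # "silent failure" mode the paper highlights.
        "silent_failure": bool(ground_truth) and not reported,
    }

def cross_tool_report(ground_truth: set, results_by_tool: dict) -> dict:
    """Aggregate per-tool evaluations to expose inconsistencies."""
    return {tool: evaluate_tool(ground_truth, found)
            for tool, found in results_by_tool.items()}

# Illustrative ground truth and tool outputs (assumed values).
truth = {"CVE-2021-44228", "CVE-2022-22965"}
results = {
    "tool_a": {"CVE-2021-44228", "CVE-2022-22965"},  # complete
    "tool_b": {"CVE-2021-44228"},                    # misses one CVE
    "tool_c": set(),                                 # fails silently
}
report = cross_tool_report(truth, results)
```

Running this on a benchmark of SBOMs with known ground truth yields exactly the kind of per-tool reliability comparison the case study reports.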