AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI security assurance is fragmented across software supply chains, adversarial attacks, and governance documentation, and lacks verifiable, machine-readable safety evidence. Method: This paper introduces the AI Risk Scanning (AIRS) Framework, which extends Software Bill of Materials (SBOM) principles to AI systems and integrates MITRE ATLAS threat modeling to build a verifiable evidence chain spanning model integrity, packaging and serialization safety, and runtime behavior. The framework grew out of three pilots: an AI-specific Bill of Materials (AIBOM) metadata schema (Smurf), operational validation (OPAL), and the AIRS proof of concept itself, and it is compared against the SPDX 3.0 and CycloneDX 1.6 SBOM standards. Contribution/Results: AIRS enforces safe-loader policies, performs per-shard hash verification, and runs contamination and backdoor/poisoning probes under controlled runtime conditions, demonstrated on a quantized GPT-OSS-20B model. The evaluation shows that the framework generates structured, auditable safety evidence, improving verifiability and trustworthiness across the AI supply chain.
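To make the per-shard hash verification concrete, here is a minimal sketch assuming a simple JSON manifest of pinned SHA-256 digests; the manifest layout and function names are assumptions for illustration, not the paper's implementation.

```python
import hashlib
import json
from pathlib import Path

def verify_shards(model_dir: str, manifest_path: str) -> dict:
    """Check each weight shard against a pinned SHA-256 digest.

    The manifest format ({"shards": {filename: sha256}}) is an assumed
    stand-in for AIBOM integrity fields, not the paper's schema.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    results = {}
    for name, expected in manifest["shards"].items():
        digest = hashlib.sha256(Path(model_dir, name).read_bytes()).hexdigest()
        results[name] = (digest == expected)
    return results

# Example: fail closed if any shard digest mismatches.
# report = verify_shards("gpt-oss-20b", "aibom_manifest.json")
# assert all(report.values()), f"integrity failure: {report}"
```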

📝 Abstract
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and governance documentation. Existing transparency mechanisms - including Model Cards, Datasheets, and Software Bills of Materials (SBOMs) - advance provenance reporting but rarely provide verifiable, machine-readable evidence of model security. This paper introduces the AI Risk Scanning (AIRS) Framework, a threat-model-based, evidence-generating framework designed to operationalize AI assurance. The AIRS Framework evolved through three progressive pilot studies - Smurf (AIBOM schema design), OPAL (operational validation), and Pilot C (AIRS) - that reframed AI documentation from descriptive disclosure toward measurable, evidence-bound verification. The framework aligns its assurance fields to the MITRE ATLAS adversarial ML taxonomy and automatically produces structured artifacts capturing model integrity, packaging and serialization safety, structural adapters, and runtime behaviors. Currently, the AIRS Framework is scoped to provide model-level assurances for LLMs, but it could be expanded to include other modalities and cover system-level threats (e.g., application-layer abuses, tool-calling). A proof-of-concept on a quantized GPT-OSS-20B model demonstrates enforcement of safe loader policies, per-shard hash verification, and contamination and backdoor probes executed under controlled runtime conditions. Comparative analysis with the SPDX 3.0 and CycloneDX 1.6 SBOM standards reveals alignment on identity and evaluation metadata, but identifies critical gaps in representing AI-specific assurance fields. The AIRS Framework thus extends SBOM practice to the AI domain by coupling threat modeling with automated, auditable evidence generation, providing a principled foundation for standardized, trustworthy, and machine-verifiable AI risk documentation.
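The "safe loader policies" in the abstract concern serialization formats whose deserialization can execute arbitrary code on load (e.g., Python pickle). A minimal sketch of such a policy check follows; the allowlist and error handling are assumptions, not the paper's enforced policy.

```python
from pathlib import Path

# Illustrative loader policy: allow only serialization formats that do not
# execute code on load. This allowlist is an assumption for the sketch.
SAFE_SUFFIXES = {".safetensors", ".gguf"}
UNSAFE_SUFFIXES = {".pkl", ".pt", ".bin"}  # pickle-backed formats

def check_loader_policy(artifact: str) -> None:
    suffix = Path(artifact).suffix.lower()
    if suffix in UNSAFE_SUFFIXES:
        raise PermissionError(
            f"{artifact}: pickle-based serialization rejected by loader policy"
        )
    if suffix not in SAFE_SUFFIXES:
        raise PermissionError(f"{artifact}: unknown format, failing closed")

check_loader_policy("model-00001-of-00004.safetensors")  # passes
# check_loader_policy("weights.pkl")  # raises PermissionError
```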
Problem

Research questions and friction points this paper is trying to address.

Addressing fragmented AI security across supply-chain and adversarial threats
Providing verifiable machine-readable evidence for model security assurance (see the example record after this list)
Extending SBOM standards to cover AI-specific risk documentation gaps
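As a concrete illustration of what "machine-readable evidence" could look like, a minimal sketch follows: a structured record binding a model artifact to a security check, its result, and a pinning digest. All field names are illustrative assumptions, not the AIRS or AIBOM schema.

```python
# Minimal sketch of a machine-readable evidence record. Every field name
# and value here is an illustrative assumption, not the AIRS schema.
evidence_record = {
    "subject": "gpt-oss-20b-quantized",      # model under assurance
    "check": "per_shard_hash",               # which verification ran
    "result": "pass",
    "generated_at": "2025-11-16T00:00:00Z",  # when the evidence was produced
    "evidence_sha256": "0f3a...",            # digest pinning the artifact
}
```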
Innovation

Methods, ideas, or system contributions that make the work stand out.

AIRS Framework generates machine-readable evidence for AI security
Aligns AI assurance fields with MITRE ATLAS adversarial ML taxonomy (illustrated in the sketch after this list)
Extends SBOM practice by coupling threat modeling with automated verification
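A hedged sketch of the ATLAS alignment idea: each assurance field carries a MITRE ATLAS technique ID so generated evidence can be queried by threat. The dataclass and check names are illustrative, and the specific ID-to-check mapping is an assumption rather than the paper's.

```python
from dataclasses import dataclass

@dataclass
class AssuranceField:
    check: str           # name of the assurance check (illustrative)
    atlas_id: str        # MITRE ATLAS technique identifier
    evidence_path: str   # where the generated artifact is written

# Assumed mapping for the sketch; not reproduced from the paper.
ASSURANCE_MAP = [
    AssuranceField("per_shard_hash", "AML.T0010", "evidence/integrity.json"),
    AssuranceField("backdoor_probe", "AML.T0018", "evidence/backdoor.json"),
    AssuranceField("poisoning_probe", "AML.T0020", "evidence/poisoning.json"),
]

def fields_for_technique(atlas_id: str) -> list[AssuranceField]:
    """Return every assurance field bound to a given ATLAS technique."""
    return [f for f in ASSURANCE_MAP if f.atlas_id == atlas_id]
```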
Authors
Samuel Nathanson
Johns Hopkins University Applied Physics Laboratory (APL)
Alexander Lee
Catherine Chen Kieffer
Johns Hopkins University Applied Physics Laboratory (APL)
Jared Junkin
Johns Hopkins University Applied Physics Laboratory (APL)
Jessica Ye
Johns Hopkins University Applied Physics Laboratory (APL)
Amir Saeed
Johns Hopkins University Applied Physics Laboratory (APL)
Melanie Lockhart
Johns Hopkins University Applied Physics Laboratory (APL)
Russ Fink
Johns Hopkins University Applied Physics Laboratory (APL)
Elisha Peterson
Johns Hopkins University Applied Physics Laboratory (APL)
Lanier Watkins
Johns Hopkins University Applied Physics Laboratory (APL)