Attesting LLM Pipelines: Enforcing Verifiable Training and Release Claims

📅 2026-03-30
🤖 AI Summary
This work addresses the security risks in large language model (LLM) supply chains arising from the lack of cryptographic binding between training and release attestations, which can enable dependency tampering, provenance spoofing, and backdoored models. The paper proposes the first end-to-end governance framework that integrates verifiable attestations with automated gating policies, enforcing validation of attestation evidence before any component enters a trusted environment. The framework mandates secure loading, static scanning, and default-safe deployment constraints, while supporting pluggable integration of runtime dynamic signals to reduce uncertainty. By leveraging cryptographic proofs, standardized attestation formats, and policy-mapping mechanisms, the approach enables high-coverage security decisions in representative scenarios, offering a systematic and practical solution for securing LLM supply chains.
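The core gap the summary identifies is that claims are not cryptographically bound to the artifacts they describe. A minimal sketch of such a binding, assuming an HMAC signing key as a stand-in for real PKI or transparency-log signatures (the function names, claim fields, and key handling here are illustrative, not the paper's actual attestation format):

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # hypothetical stand-in for a real signing key

def artifact_digest(data: bytes) -> str:
    """Content address of the artifact (e.g., a model weights file)."""
    return hashlib.sha256(data).hexdigest()

def make_claim(data: bytes, scan_result: str) -> dict:
    """Bind evidence to the artifact digest by signing the pair together."""
    digest = artifact_digest(data)
    payload = f"{digest}|{scan_result}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"digest": digest, "scan_result": scan_result, "sig": sig}

def verify_claim(data: bytes, claim: dict) -> bool:
    """Reject if the artifact was swapped or the evidence was forged."""
    payload = f"{artifact_digest(data)}|{claim['scan_result']}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["sig"])

weights = b"model weights bytes"
claim = make_claim(weights, "static-scan:clean")
assert verify_claim(weights, claim)            # untampered artifact passes
assert not verify_claim(b"backdoored", claim)  # swapped artifact fails
```

Because the signature covers both the digest and the scan result, neither the artifact nor its evidence can be replaced independently without detection, which is the property the gating policies rely on.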
📝 Abstract
Modern Large Language Model (LLM) systems are assembled from third-party artifacts such as pre-trained weights, fine-tuning adapters, datasets, dependency packages, and container images, fetched through automated pipelines. This automation speeds assembly but introduces supply-chain risks, including compromised dependencies, malicious hub artifacts, unsafe deserialization, forged provenance, and backdoored models. A core gap is that training and release claims (e.g., data and code lineage, build environment, and security scanning results) are rarely cryptographically bound to the artifacts they describe, making enforcement inconsistent across teams and stages. We propose an attestation-aware promotion gate: before an artifact is admitted into a trusted environment (training, fine-tuning, deployment), the gate verifies claim evidence, enforces safe-loading and static-scanning policies, and applies secure-by-default deployment constraints. When organizations operate runtime security tooling, the same gate can optionally ingest standardized dynamic signals via plugins to reduce uncertainty for high-risk artifacts. We outline a practical claims-to-controls mapping and an evaluation blueprint using representative supply-chain scenarios and operational metrics (coverage and decisions), charting a path toward a full research paper.
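The claims-to-controls mapping behind the promotion gate can be sketched as a per-environment policy check: an artifact is admitted only when verified claim evidence exists for every control the target environment requires. The control names and environment policies below are invented for illustration; the abstract outlines the mapping idea, not these specifics.

```python
# Hypothetical controls required before admission to each trusted environment.
REQUIRED_CONTROLS = {
    "training":   {"provenance_verified", "safe_loader"},
    "deployment": {"provenance_verified", "safe_loader", "static_scan_clean"},
}

def gate(verified_claims: set[str], target_env: str) -> tuple[bool, set[str]]:
    """Admit only if every required control is backed by verified evidence.

    Returns (admit, missing_controls) so callers can log why an artifact
    was blocked rather than failing silently.
    """
    missing = REQUIRED_CONTROLS[target_env] - verified_claims
    return (not missing, missing)

# An artifact with provenance and safe-loading evidence may enter training,
# but is blocked from deployment until a clean static scan is attested.
claims = {"provenance_verified", "safe_loader"}
assert gate(claims, "training") == (True, set())
assert gate(claims, "deployment") == (False, {"static_scan_clean"})
```

Making the deployment policy a strict superset of the training policy reflects the secure-by-default stance in the abstract: each stage toward production demands more evidence, never less.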
Problem

Research questions and friction points this paper is trying to address.

LLM supply chain
attestation
training claims
release claims
artifact provenance
Innovation

Methods, ideas, or system contributions that make the work stand out.

attestation
LLM supply chain
verifiable claims
promotion gate
secure deployment