Understanding the Supply Chain and Risks of Large Language Model Applications

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing risk assessments of LLM application supply chains focus narrowly on isolated layers, such as models or datasets, and lack systematic analysis of cross-layer dependencies (model, dataset, library, application). Method: We construct the first comprehensive four-layer LLM supply chain graph, integrating 3,859 applications, 109,211 models, 2,474 datasets, and 9,862 libraries; propose the first security benchmark dataset for LLM supply chains; and achieve a holistic mapping of fine-tuning paths, dataset reuse, and library dependencies. Through multi-source dependency analysis and vulnerability correlation, we identify 1,555 security risks, exposing critical hazards stemming from deeply nested dependencies. Contribution/Results: This work fills a fundamental gap in systematic LLM supply chain risk assessment, providing empirically grounded insights and actionable guidelines for building trustworthy LLM systems.

📝 Abstract
The rise of Large Language Models (LLMs) has led to the widespread deployment of LLM-based systems across diverse domains. As these systems proliferate, understanding the risks associated with their complex supply chains is increasingly important. LLM-based systems are not standalone: they rely on interconnected supply chains involving pretrained models, third-party libraries, datasets, and infrastructure. Yet most risk assessments focus narrowly on the model or data level, overlooking broader supply chain vulnerabilities. While recent studies have begun to address LLM supply chain risks, there remains a lack of benchmarks for systematic research. To address this gap, we introduce the first comprehensive dataset for analyzing and benchmarking LLM supply chain security. We collect 3,859 real-world LLM applications and perform interdependency analysis, identifying 109,211 models, 2,474 datasets, and 9,862 libraries. We extract model fine-tuning paths, dataset reuse, and library reliance, mapping the ecosystem's structure. To evaluate security, we gather 1,555 risk-related issues (50 for applications, 325 for models, 18 for datasets, and 1,229 for libraries) from public vulnerability databases. Using this dataset, we empirically analyze component dependencies and risks. Our findings reveal deeply nested dependencies in LLM applications and significant vulnerabilities across the supply chain, underscoring the need for comprehensive security analysis. We conclude with practical recommendations to guide researchers and developers toward safer, more trustworthy LLM-enabled systems.
Problem

Research questions and friction points this paper is trying to address.

Analyzing risks in LLM supply chains
Identifying vulnerabilities in model dependencies
Lack of benchmarks for security research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive dataset for LLM supply chain security
Interdependency analysis of models, datasets, libraries
Empirical analysis of component dependencies and risks
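The interdependency analysis above can be pictured as traversing a four-layer dependency graph. A minimal sketch, assuming a directed adjacency map with hypothetical component names (the paper's actual tooling and data are not shown here): applications depend on models and libraries, models on base models and datasets, and a simple recursion measures how deeply dependencies nest.

```python
# Hypothetical sketch of the four-layer supply chain as an adjacency map.
# Component names are illustrative, not from the paper's dataset
# (which spans 3,859 apps, 109,211 models, 2,474 datasets, 9,862 libraries).
deps = {
    "app:chatbot":      ["model:llama-ft", "lib:transformers"],
    "model:llama-ft":   ["model:llama-base", "dataset:qa-corpus"],  # fine-tuning path + dataset reuse
    "lib:transformers": ["lib:tokenizers"],                         # transitive library dependency
}

def dependency_depth(node: str) -> int:
    """Length of the longest transitive dependency chain below `node`."""
    children = deps.get(node, [])
    if not children:
        return 0
    return 1 + max(dependency_depth(c) for c in children)

print(dependency_depth("app:chatbot"))  # -> 2: app -> model -> base model (or dataset)
```

Risk correlation then amounts to joining vulnerability records onto nodes reachable from each application, so a flaw in a deeply nested library or base model surfaces in every dependent app.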