Securing the AI Supply Chain: What Can We Learn From Developer-Reported Security Issues and Solutions of AI Projects?

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
A systematic understanding of security threats across the AI supply chain remains lacking, hindering the design of effective mitigation strategies. Method: We construct the first large-scale, empirically grounded dataset of 313,000 security-related discussions crawled from Hugging Face and GitHub. We propose a fine-grained, dual-dimensional taxonomy—comprising 32 security issue categories and 24 solution types—spanning four layers: system, tool, model, and data. Our methodology combines large-scale web crawling and automated cleaning with keyword-based filtering and a fine-tuned DistilBERT pipeline to identify security discussions, followed by hybrid qualitative analysis using topic modeling and manual coding. Contribution/Results: We find pervasive solution gaps—particularly at the model and data layers—driven primarily by dependency complexity and component opaqueness. This work delivers the first empirically validated classification framework and actionable insights to advance AI security governance.
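The two-stage identification pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keyword list and the placeholder classifier function are hypothetical stand-ins for the authors' curated keywords and fine-tuned DistilBERT model.

```python
# Stage 1: cheap keyword matching narrows the crawled posts to candidates;
# Stage 2: a fine-tuned DistilBERT classifier scores each candidate.
# The keyword set below is illustrative only, not the paper's actual list.
SECURITY_KEYWORDS = [
    "vulnerability", "cve", "exploit", "malicious",
    "injection", "backdoor", "pickle", "leak",
]

def keyword_filter(post: str) -> bool:
    """Stage 1: keep a post if it mentions any security keyword."""
    text = post.lower()
    return any(kw in text for kw in SECURITY_KEYWORDS)

def distilbert_is_security(post: str) -> bool:
    """Placeholder for Stage 2 model inference. In practice this would be a
    fine-tuned DistilBERT classifier, e.g. via Hugging Face transformers:
        pipeline("text-classification", model=<fine-tuned checkpoint>)
    Here it trivially accepts everything so the sketch stays runnable."""
    return True

def classify_security(posts):
    """Run both stages; only keyword-matched posts reach the classifier."""
    candidates = [p for p in posts if keyword_filter(p)]
    return [p for p in candidates if distilbert_is_security(p)]
```

The design rationale is standard for large-scale mining: the inexpensive keyword pass discards the bulk of irrelevant discussions so the more expensive classifier only runs on plausible candidates.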

📝 Abstract
The rapid growth of Artificial Intelligence (AI) models and applications has led to an increasingly complex security landscape. Developers of AI projects must contend not only with traditional software supply chain issues but also with novel, AI-specific security threats. However, little is known about what security issues are commonly encountered and how they are resolved in practice. This gap hinders the development of effective security measures for each component of the AI supply chain. We bridge this gap by conducting an empirical investigation of developer-reported issues and solutions, based on discussions from Hugging Face and GitHub. To identify security-related discussions, we develop a pipeline that combines keyword matching with an optimal fine-tuned DistilBERT classifier, which achieved the best performance in our extensive comparison of various deep learning and large language models. This pipeline produces a dataset of 312,868 security discussions, providing insights into the security reporting practices of AI applications and projects. We conduct a thematic analysis of 753 posts sampled from our dataset and uncover a fine-grained taxonomy of 32 security issues and 24 solutions across four themes: (1) System and Software, (2) External Tools and Ecosystem, (3) Model, and (4) Data. We reveal that many security issues arise from the complex dependencies and black-box nature of AI components. Notably, challenges related to Models and Data often lack concrete solutions. Our insights can offer evidence-based guidance for developers and researchers to address real-world security threats across the AI supply chain.
Problem

Research questions and friction points this paper is trying to address.

Identifies common security issues in the AI supply chain from developer reports.
Analyzes solutions to AI-specific threats across the system, tool, model, and data layers.
Addresses gaps in securing AI components caused by complex dependencies and their black-box nature.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Empirical study of developer-reported security issues and solutions
Pipeline combining keyword matching with fine-tuned distilBERT classifier
Thematic analysis revealing taxonomy of 32 issues and 24 solutions
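The dual-dimensional taxonomy pairs issue categories with solution types within each of the paper's four themes. The structure below is an illustrative sketch only: the four layer names come from the abstract, but the example issue/solution entries are hypothetical stand-ins, not the paper's actual 32 issues and 24 solutions.

```python
# Illustrative shape of a dual-dimensional taxonomy: each layer (from the
# paper's four themes) maps to issue categories and solution types.
# All specific entries below are hypothetical examples, not the paper's.
taxonomy = {
    "System and Software": {
        "issues": ["vulnerable dependency version"],
        "solutions": ["upgrade the dependency"],
    },
    "External Tools and Ecosystem": {
        "issues": ["compromised third-party package"],
        "solutions": ["pin to a vetted release"],
    },
    "Model": {
        "issues": ["unsafe model file deserialization"],
        "solutions": ["load weights via a safer format"],
    },
    "Data": {
        "issues": ["suspected poisoned training data"],
        "solutions": [],  # reflects the reported solution gap at this layer
    },
}

def layers_without_solutions(tax):
    """Return layers whose issues currently have no recorded solutions."""
    return [layer for layer, d in tax.items() if d["issues"] and not d["solutions"]]
```

A structure like this makes the paper's central finding easy to query: the Model and Data layers are where issues most often lack concrete solutions.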
The Anh Nguyen
School of Computer Science and Information Technology, Adelaide University
Triet Huynh Minh Le
School of Computer Science and Information Technology, Adelaide University
M. Ali Babar
Professor of Software Engineering, The University of Adelaide, Australia
Software Security & Privacy · Big Data Platforms & Architectures · Empirical Software Engineering · Software Architecture