🤖 AI Summary
A systematic understanding of security threats across the AI supply chain remains lacking, hindering the design of effective mitigation strategies. Method: We construct the first large-scale, empirically grounded dataset of roughly 313,000 security-related discussions crawled from Hugging Face and GitHub. We propose a fine-grained, dual-dimensional taxonomy—comprising 32 security issue categories and 24 solution types—spanning four layers: system, tool, model, and data. Our methodology integrates keyword-based filtering with a fine-tuned DistilBERT pipeline for security discussion identification, coupled with hybrid qualitative analysis combining large-scale web crawling, automated cleaning, topic modeling, and manual coding. Contribution/Results: We find pervasive solution gaps—particularly at the model and data layers—driven primarily by dependency complexity and component opacity. This work delivers the first empirically validated classification framework and actionable insights to advance AI security governance.
📝 Abstract
The rapid growth of Artificial Intelligence (AI) models and applications has led to an increasingly complex security landscape. Developers of AI projects must contend not only with traditional software supply chain issues but also with novel, AI-specific security threats. However, little is known about which security issues are commonly encountered and how they are resolved in practice. This gap hinders the development of effective security measures for each component of the AI supply chain. We bridge this gap by conducting an empirical investigation of developer-reported issues and solutions, based on discussions from Hugging Face and GitHub. To identify security-related discussions, we develop a pipeline that combines keyword matching with a fine-tuned DistilBERT classifier, which achieved the best performance in our extensive comparison of various deep learning and large language models. This pipeline produces a dataset of 312,868 security discussions, providing insights into the security reporting practices of AI applications and projects. We conduct a thematic analysis of 753 posts sampled from our dataset and uncover a fine-grained taxonomy of 32 security issues and 24 solutions across four themes: (1) System and Software, (2) External Tools and Ecosystem, (3) Model, and (4) Data. We reveal that many security issues arise from the complex dependencies and black-box nature of AI components. Notably, challenges related to Models and Data often lack concrete solutions. Our insights can offer evidence-based guidance for developers and researchers to address real-world security threats across the AI supply chain.
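The two-stage identification pipeline described in the abstract (keyword matching to prefilter candidates, then a fine-tuned DistilBERT classifier to confirm them) could be staged roughly as below. This is a minimal sketch: the keyword list, model path, and label names are illustrative assumptions, not the paper's actual lexicon or model.

```python
import re

# Stage 1: keyword-based prefiltering of crawled discussions.
# The keyword list is an illustrative assumption, not the paper's lexicon.
SECURITY_KEYWORDS = [
    "vulnerability", "cve", "exploit", "malicious", "pickle",
    "deserialization", "injection", "poisoning", "backdoor", "leak",
]
_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, SECURITY_KEYWORDS)) + r")\b",
    re.IGNORECASE,
)

def keyword_prefilter(posts):
    """Keep only posts mentioning at least one security keyword."""
    return [p for p in posts if _PATTERN.search(p)]

# Stage 2 (not run here): pass survivors to a fine-tuned DistilBERT
# binary classifier, e.g. with Hugging Face transformers. The model
# path and label are hypothetical placeholders:
#
#   from transformers import pipeline
#   clf = pipeline("text-classification", model="path/to/finetuned-distilbert")
#   security = [p for p in candidates if clf(p)[0]["label"] == "SECURITY"]

posts = [
    "Loading this checkpoint triggers a pickle deserialization warning",
    "How do I change the learning rate scheduler?",
    "Possible prompt injection via the chat template",
]
print(keyword_prefilter(posts))
```

The cheap regex stage keeps the expensive classifier off the vast majority of non-security posts, which matters at the scale of 312,868 retained discussions drawn from a much larger crawl.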