AI Safety in the Eyes of the Downstream Developer: A First Look at Concerns, Practices, and Challenges

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the practice gap among downstream developers in addressing pre-trained model (PTM) security issues during AI software development. Using a mixed-methods approach—18 in-depth interviews, 86 practitioner surveys, and content analysis of 874 AI incident reports—we systematically uncover a pronounced “awareness–action gap”: developers exhibit high security awareness yet demonstrate low practical capability, especially during PTM preparation and selection. We identify, for the first time, three structural bottlenecks: absence of actionable security guidelines, inadequate documentation support, and domain-knowledge fragmentation. Based on these findings, we propose a four-tier collaborative governance framework targeting model providers, developers, researchers, and policymakers. This framework offers concrete, implementable pathways to bridge the AI security practice gap, advancing the field from awareness-driven efforts toward institutionalized, end-to-end security engineering practices across the AI lifecycle.

Technology Category

Application Category

📝 Abstract
Pre-trained models (PTMs) have become a cornerstone of AI-based software, allowing for rapid integration and development with minimal training overhead. However, their adoption also introduces unique safety challenges, such as data leakage and biased outputs, that demand rigorous handling by downstream developers. While previous research has proposed taxonomies of AI safety concerns and various mitigation strategies, how downstream developers address these issues remains unexplored. This study investigates downstream developers' concerns, practices and perceived challenges regarding AI safety issues during AI-based software development. To achieve this, we conducted a mixed-method study, including interviews with 18 participants, a survey of 86 practitioners, and an analysis of 874 AI incidents from the AI Incident Database. Our results reveal that while developers generally demonstrate strong awareness of AI safety concerns, their practices, especially during the preparation and PTM selection phases, are often inadequate. The lack of concrete guidelines and policies leads to significant variability in the comprehensiveness of their safety approaches throughout the development lifecycle, with additional challenges such as poor documentation and knowledge gaps, further impeding effective implementation. Based on our findings, we offer suggestions for PTM developers, AI-based software developers, researchers, and policy makers to enhance the integration of AI safety measures.
Problem

Research questions and friction points this paper is trying to address.

Investigates downstream developers' AI safety concerns and practices
Examines challenges in AI-based software development safety
Assesses gaps in guidelines for AI safety implementation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-method study with interviews and surveys
Analysis of AI Incident Database records
Guidelines for AI safety integration
🔎 Similar Papers
No similar papers found.
H
Haoyu Gao
The University of Melbourne, Melbourne, Victoria, Australia
M
Mansooreh Zahedi
The University of Melbourne, Melbourne, Victoria, Australia
Wenxin Jiang
Wenxin Jiang
Ph.D. student@ECE, Purdue University
Software EngineeringSoftware Supply ChainCybersecurityMachine Learning
H
Hong Yi Lin
The University of Melbourne, Melbourne, Victoria, Australia
J
James Davis
Purdue University, West Lafayette, IN, USA
Christoph Treude
Christoph Treude
Associate Professor of Computer Science, Singapore Management University
Software EngineeringEmpirical Software EngineeringHuman-AI InteractionAI for ScienceAI4SE