🤖 AI Summary
This study examines how risks from generative AI evolve in real-world settings and how the resulting harms propagate across stakeholders. Method: Drawing on an empirical analysis of 499 publicly documented incidents, the authors construct a domain-specific failure taxonomy for generative AI and propose a three-dimensional risk–failure–stakeholder mapping framework. Using systematic literature review, incident coding, and co-occurrence statistics, they quantify prevalent failure patterns, associated harm types, and the distribution of affected stakeholders. Results: A defining characteristic distinguishes generative AI from traditional AI: over 70% of incidents originate in usage contexts, yet their harms largely fall on parties beyond the primary users, including the general public, the environment, and vulnerable populations. The study argues for prioritizing non-technical governance interventions and provides an empirically grounded foundation for policy design, developer accountability frameworks, and public risk-literacy initiatives.
📝 Abstract
Due to its general-purpose nature, Generative AI is applied in an ever-growing set of domains and tasks, leading to an expanding set of risks of harm to people, communities, society, and the environment. These risks may arise from failures during the design and development of the technology, as well as during its release, deployment, or downstream usage and appropriation of its outputs. In this paper, building on prior taxonomies of AI risks, harms, and failures, we construct a taxonomy specifically for Generative AI failures and map them to the harms they precipitate. Through a systematic analysis of 499 publicly reported incidents, we describe which harms are reported, how they arose, and whom they impact. We report the prevalence of each type of harm, underlying failure mode, and harmed stakeholder, as well as their common co-occurrences. We find that most reported incidents are caused by use-related issues yet bring harm to parties beyond the end user(s) of the Generative AI system at fault, and that the landscape of Generative AI harms is distinct from that of traditional AI. Our work offers actionable insights to policymakers, developers, and Generative AI users. In particular, we call for the prioritization of non-technical risk and harm mitigation strategies, including public disclosures, education, and careful regulatory stances.