🤖 AI Summary
Governance of AI's societal harms currently emphasizes identifying and auditing harms but largely neglects what happens after harm occurs. Method: We develop the first taxonomy of reparative actions for AI harms, grounded in corrective, procedural, and restorative justice theories, organizing reparative objectives into four goals: acknowledgment of harm, attribution of responsibility, redress, and systemic change. We then thematically code 1,060 real-world incidents from an AI incident database. Results: Only 12% of incidents involved attribution of responsibility, and fewer than 5% triggered institutional reform; reparative efforts are overwhelmingly symbolic and concentrated in early-stage responses. We provide a baseline for evaluating reparative actions and identify two core deficits: accountability gaps and the absence of structural reform. This work contributes a theoretical framework, a methodological toolkit, and policy pathways toward a more accountable, reparative AI governance ecosystem.
📝 Abstract
As Artificial Intelligence (AI) systems are integrated into more aspects of society, they offer new capabilities but also cause a range of harms that are drawing increasing scrutiny. A large body of work in the Responsible AI community has focused on identifying and auditing these harms. However, much less is understood about what happens after harm occurs: what constitutes reparation, who initiates it, and how effective these reparations are. In this paper, we develop a taxonomy of AI harm reparation based on a thematic analysis of real-world incidents. The taxonomy organizes reparative actions into four overarching goals: acknowledging harm, attributing responsibility, providing remedies, and enabling systemic change. We apply this framework to a dataset of 1,060 AI-related incidents, analyzing the prevalence of each action and the distribution of stakeholder involvement. Our findings show that reparation efforts are concentrated in early, symbolic stages, with limited actions toward accountability or structural reform. Drawing on theories of justice, we argue that existing responses fall short of delivering meaningful redress. This work contributes a foundation for advancing more accountable and reparative approaches to Responsible AI.
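The prevalence analysis described above can be sketched in code. The following is a minimal, illustrative example, not the paper's actual pipeline: the incident records, their labels, and the goal names are all hypothetical stand-ins for the coded dataset. It shows how incidents tagged with reparative actions from the four-goal taxonomy could be tallied into per-goal prevalence figures.

```python
from collections import Counter

# Hypothetical incident records (illustrative only, not the paper's data).
# Each incident is tagged with the reparative actions observed, which may
# be any subset of the four taxonomy goals, or empty if no action occurred.
incidents = [
    {"id": 1, "actions": {"acknowledge_harm"}},
    {"id": 2, "actions": {"acknowledge_harm", "attribute_responsibility"}},
    {"id": 3, "actions": set()},  # harm reported, no reparative response
    {"id": 4, "actions": {"acknowledge_harm", "provide_remedy"}},
    {"id": 5, "actions": {"acknowledge_harm", "attribute_responsibility",
                          "provide_remedy", "systemic_change"}},
]

# The four overarching goals of the taxonomy.
GOALS = ["acknowledge_harm", "attribute_responsibility",
         "provide_remedy", "systemic_change"]

def prevalence(incidents, goals):
    """Fraction of incidents in which each reparative goal appears."""
    counts = Counter()
    for inc in incidents:
        for goal in inc["actions"]:
            counts[goal] += 1
    n = len(incidents)
    return {goal: counts[goal] / n for goal in goals}

for goal, frac in prevalence(incidents, GOALS).items():
    print(f"{goal}: {frac:.0%}")
```

With real coded data, the same tally would surface the pattern the paper reports: symbolic early-stage actions (acknowledgment) dominating, while accountability and systemic-change actions remain rare.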