What Comes After Harm? Mapping Reparative Actions in AI through Justice Frameworks

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI societal harm governance emphasizes identification and auditing but critically neglects post-harm remediation practices. Method: We develop the first taxonomy of restorative actions for AI harms, grounded in corrective, procedural, and restorative justice theories, categorizing objectives into four types: acknowledgment of harm, attribution of responsibility, redress, and systemic transformation; we empirically code and thematically analyze 1,060 real-world AI incidents from an AI accident database. Results: Only 12% of incidents involved responsibility attribution, and fewer than 5% triggered institutional reform; restorative efforts are overwhelmingly symbolic and confined to early-stage responses. We establish a benchmark for evaluating restorative actions, identifying two core deficits: accountability gaps and absence of structural reform. This work provides a theoretical framework, methodological toolkit, and policy pathways toward a responsible, restorable AI governance ecosystem.

📝 Abstract
As Artificial Intelligence (AI) systems are integrated into more aspects of society, they offer new capabilities but also cause a range of harms that are drawing increasing scrutiny. A large body of work in the Responsible AI community has focused on identifying and auditing these harms. However, much less is understood about what happens after harm occurs: what constitutes reparation, who initiates it, and how effective these reparations are. In this paper, we develop a taxonomy of AI harm reparation based on a thematic analysis of real-world incidents. The taxonomy organizes reparative actions into four overarching goals: acknowledging harm, attributing responsibility, providing remedies, and enabling systemic change. We apply this framework to a dataset of 1,060 AI-related incidents, analyzing the prevalence of each action and the distribution of stakeholder involvement. Our findings show that reparation efforts are concentrated in early, symbolic stages, with limited actions toward accountability or structural reform. Drawing on theories of justice, we argue that existing responses fall short of delivering meaningful redress. This work contributes a foundation for advancing more accountable and reparative approaches to Responsible AI.
Problem

Research questions and friction points this paper is trying to address.

Understanding reparation for AI harms and its effectiveness
Developing a taxonomy of AI harm reparation actions
Analyzing stakeholder involvement in AI reparation efforts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develop taxonomy of AI harm reparation
Analyze 1,060 AI incidents using justice frameworks
Identify gaps in accountability and systemic reform
Sijia Xiao
Human-Computer Interaction Institute, Carnegie Mellon University
Haodi Zou
Department of Computer Science and Engineering, University of California, San Diego
Alice Qian Zhang
Human-Computer Interaction Institute, Carnegie Mellon University
Deepak Kumar
Department of Computer Science and Engineering, University of California, San Diego
Hong Shen
Human-Computer Interaction Institute, Carnegie Mellon University
Jason Hong
Carnegie Mellon University
Human-Computer Interaction, Usable Privacy and Security, Mobile Computing, Social Computing
Motahhare Eslami
Carnegie Mellon University
Human-Computer Interaction, Social Computing, Data Mining