🤖 AI Summary
Current AI ethics assessments are fragmented: they focus predominantly on fairness, transparency, privacy, and trust at the model or output level while neglecting interactions across system components, real-world harm contexts, and the pathways through which harms propagate, resulting in evaluations disconnected from actual risk scenarios and lacking actionable thresholds. Method: Through a scoping review synthesizing nearly 800 ethics measures, this study constructs a four-dimensional relational framework, "System Components–Attributes–Hazards–Harms," to systematically map ethical assessment dimensions. Contribution/Results: The framework uncovers three critical gaps: insufficient system integration, weak contextual embedding, and poor actionability. It advances AI ethics evaluation from isolated measurement toward a systemic, traceable, and intervention-oriented paradigm, strengthening regulatory alignment and supporting practical deployment in industry settings.
📝 Abstract
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles (fairness, transparency, privacy, and trust) and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.