Measuring the right thing: justifying metrics in AI impact assessments

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
AI impact assessments frequently encounter challenges in conceptualizing, operationalizing, and justifying ethical and societal value metrics. This paper proposes a two-stage “concept–indicator” methodology: first, applying conceptual engineering—grounded in normative ethical theories (e.g., Rawlsian justice theory)—to rigorously clarify and define core values such as fairness; second, systematically selecting and adapting empirically measurable, traceable indicators aligned with these clarified concepts. This approach constitutes the first systematic integration of conceptual engineering into AI assessment frameworks, explicitly distinguishing epistemic and normative justification requirements at the conceptual level from empirical validity and feasibility criteria at the indicator level—thereby demystifying the “black box” of ethical metrics. The resulting concept-driven indicator selection paradigm significantly enhances assessment transparency, defensibility, and value alignment, advancing AI governance from technical compliance toward substantive value embedding.
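For concreteness, the kind of indicator-level divergence described above can be stated formally. The two criteria below (statistical parity and equalized odds) are standard formulations from the algorithmic fairness literature, given here only as an illustration; they are not necessarily the specific metrics the paper itself compares.

$$P(\hat{Y}=1 \mid A=a) \;=\; P(\hat{Y}=1 \mid A=a') \qquad \text{(statistical parity)}$$

$$P(\hat{Y}=1 \mid A=a, Y=y) \;=\; P(\hat{Y}=1 \mid A=a', Y=y), \;\; y \in \{0,1\} \qquad \text{(equalized odds)}$$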

📝 Abstract
AI Impact Assessments are only as good as the measures used to assess the impact of these systems. It is therefore paramount that we can justify our choice of metrics in these assessments, especially for difficult-to-quantify ethical and social values. We present a two-step approach to ensure metrics are properly motivated. First, a conception needs to be spelled out (e.g. Rawlsian fairness or fairness as solidarity), and then a metric can be fitted to that conception. Both steps require separate justifications: conceptions can be judged on how well they fulfil the function of, for example, fairness, and we argue that conceptual engineering offers helpful tools for this step. Second, metrics need to be fitted to a conception. We illustrate this process through an examination of competing fairness metrics, showing that the additional content a conception offers helps justify the choice of a specific metric. We thus advocate that impact assessments be clear not only about their metrics, but also about the conceptions that motivate those metrics.
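As a minimal, hypothetical sketch of why the conception matters for metric choice (this is not code from the paper; the toy data and the particular pair of indicators are illustrative assumptions), the snippet below computes two competing fairness indicators on the same predictions and shows that they can disagree:

```python
# Illustrative sketch: two competing fairness indicators on toy data.
# Which one counts as "the" fairness metric depends on the conception adopted.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between groups 0 and 1."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy data: `group` is the protected attribute, `y_true` the actual outcome.
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])

print(demographic_parity_gap(y_pred, group))         # 0.25 -> unfair by parity
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0  -> fair by equal opportunity
```

On this toy data a parity-based conception flags the predictions as unfair while an equal-opportunity conception does not; adjudicating such disagreements is exactly what the prior choice and justification of a conception is meant to do.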
Problem

Research questions and friction points this paper is trying to address.

Justifying metrics in AI impact assessments
Ensuring metrics align with ethical values
Clarifying conceptions behind fairness metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-step approach for metric justification
Conceptual engineering tools for fairness
Conception-based metric fitting process
🔎 Similar Papers
2024-03-13 · Artificial Intelligence Review · Citations: 0