Unequal Uncertainty: Rethinking Algorithmic Interventions for Mitigating Discrimination from AI

📅 2025-08-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Algorithmic interventions triggered by AI prediction uncertainty, such as selective abstention and selective friction, can produce discriminatory outcomes in high-stakes domains like credit approval and content moderation, compounding systemic disadvantages for marginalized groups. This paper provides the first integrated socio-technical and legal analysis of such interventions, using case studies and normative evaluation to show how ostensibly neutral uncertainty thresholds can create de facto discrimination and potentially violate anti-discrimination law. We find that selective abstention obscures decisional bias and undermines accountability, whereas selective friction, for example flagging uncertain cases for human review with interpretability support, preserves transparency and fosters more deliberate, fairer collaborative judgment. Accordingly, we propose a friction-centered fairness framework as a legally grounded and operationally viable pathway for responsible AI governance.
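
To make the two interventions concrete, here is a minimal sketch, not taken from the paper, of how an uncertainty threshold might route a single prediction under each policy. The `Decision` structure, function names, and threshold value are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    prediction: Optional[int]      # None means the prediction was withheld
    friction_note: Optional[str]   # warning surfaced to the human reviewer, if any

# Illustrative cutoff; a real deployment would calibrate this per task.
UNCERTAINTY_THRESHOLD = 0.3

def selective_abstention(prediction: int, uncertainty: float) -> Decision:
    """Withhold the prediction entirely when uncertainty is high."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return Decision(prediction=None, friction_note=None)
    return Decision(prediction=prediction, friction_note=None)

def selective_friction(prediction: int, uncertainty: float) -> Decision:
    """Deliver the prediction, but attach a salient warning that slows the decision."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        note = (f"High model uncertainty ({uncertainty:.2f}): "
                "review the supporting evidence before deciding.")
        return Decision(prediction=prediction, friction_note=note)
    return Decision(prediction=prediction, friction_note=None)

if __name__ == "__main__":
    # The same uncertain case disappears under abstention but stays visible,
    # with a warning, under friction.
    print(selective_abstention(prediction=1, uncertainty=0.45))
    print(selective_friction(prediction=1, uncertainty=0.45))
```

The contrast mirrors the paper's argument: under abstention the uncertain case vanishes from the human's view, while under friction it remains visible with an explicit prompt for closer scrutiny.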

📝 Abstract
Uncertainty in artificial intelligence (AI) predictions poses urgent legal and ethical challenges for AI-assisted decision-making. We examine two algorithmic interventions that act as guardrails for human-AI collaboration: selective abstention, which withholds high-uncertainty predictions from human decision-makers, and selective friction, which delivers those predictions together with salient warnings or disclosures that slow the decision process. Research has shown that selective abstention based on uncertainty can inadvertently exacerbate disparities and disadvantage under-represented groups that disproportionately receive uncertain predictions. In this paper, we provide the first integrated socio-technical and legal analysis of uncertainty-based algorithmic interventions. Through two case studies, AI-assisted consumer credit decisions and AI-assisted content moderation, we demonstrate how the seemingly neutral use of uncertainty thresholds can trigger discriminatory impacts. We argue that, although both interventions pose risks of unlawful discrimination under UK law, selective frictions offer a promising pathway toward fairer and more accountable AI-assisted decision-making by preserving transparency and encouraging more cautious human judgment.
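
The abstract's disparity concern, that under-represented groups disproportionately receive uncertain predictions and are therefore abstained on more often, can be illustrated with a small audit sketch. This is an assumed example of measuring group-level abstention rates, not an analysis from the paper; the group labels and uncertainty values are toy data.

```python
from collections import defaultdict

# Illustrative toy records: (group label, model uncertainty for that case).
# A real audit would use actual protected-attribute data and calibrated scores.
cases = [
    ("group_a", 0.10), ("group_a", 0.20), ("group_a", 0.35), ("group_a", 0.15),
    ("group_b", 0.40), ("group_b", 0.55), ("group_b", 0.25), ("group_b", 0.45),
]

UNCERTAINTY_THRESHOLD = 0.3  # same illustrative cutoff as in the earlier sketch

def abstention_rates(records, threshold):
    """Fraction of each group's cases that a selective-abstention policy would withhold."""
    totals, abstained = defaultdict(int), defaultdict(int)
    for group, uncertainty in records:
        totals[group] += 1
        if uncertainty > threshold:
            abstained[group] += 1
    return {group: abstained[group] / totals[group] for group in totals}

print(abstention_rates(cases, UNCERTAINTY_THRESHOLD))
# {'group_a': 0.25, 'group_b': 0.75}: one group's cases are withheld from
# human view three times as often, even though the threshold is facially neutral.
```

Gaps of this kind are what the paper argues can amount to discriminatory impact despite the apparent neutrality of the uncertainty threshold.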
Problem

Research questions and friction points this paper is trying to address.

Addressing discrimination from AI uncertainty in decision-making
Analyzing selective abstention and friction interventions
Evaluating legal risks of uncertainty-based algorithmic interventions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective abstention withholds uncertain AI predictions
Selective friction adds warnings to uncertain predictions
Integrated socio-technical and legal analysis conducted