Perceptions of the Fairness Impacts of Multiplicity in Machine Learning

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how "model multiplicity" (the existence of multiple, comparably accurate models that can assign different predictions to the same individual) affects public perceptions of algorithmic fairness in machine learning. Method: a large-scale, scenario-based survey experiment (N = 2,147), analyzed with multivariate regression and moderation analysis to examine how task stakes and uncertainty shape fairness judgments. Contribution/Results: respondents perceive multiplicity as a threat to the fairness of model outcomes, though not to the appropriateness of using the model at all, and these fairness perceptions are significantly moderated by task stakes and epistemic uncertainty. Critically, common mitigation strategies, such as fixing a single model or randomizing outcomes, are strongly rejected. The study provides the first empirical evidence supporting an "intentional handling" principle: transparent disclosure, interpretability, and participatory governance of multiplicity sources are needed to uphold algorithmic fairness. These insights inform both fairness theory and responsible AI practice.

📝 Abstract
Machine learning (ML) is increasingly used in high-stakes settings, yet multiplicity - the existence of multiple good models - means that some predictions are essentially arbitrary. ML researchers and philosophers posit that multiplicity poses a fairness risk, but no studies have investigated whether stakeholders agree. In this work, we conduct a survey to see how multiplicity impacts lay stakeholders' - i.e., decision subjects' - perceptions of ML fairness, and which approaches to address multiplicity they prefer. We investigate how these perceptions are modulated by task characteristics (e.g., stakes and uncertainty). Survey respondents think that multiplicity threatens the fairness of model outcomes, but not the appropriateness of using the model, even though existing work suggests the opposite. Participants are strongly against resolving multiplicity by using a single model (effectively ignoring multiplicity) or by randomizing the outcomes. Our results indicate that model developers should be intentional about dealing with multiplicity in order to maintain fairness.
Problem

Research questions and friction points this paper addresses.

Machine Learning
Fairness
Model Selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Public Perspective
Multi-model Selection
Fairness in Machine Learning