Measuring Perceptions of Fairness in AI Systems: The Effects of Infra-marginality

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses how infra-marginality—differences in data distributions across groups—complicates judgments of AI fairness, as conventional statistical parity metrics often fail to align with human perceptions of fairness. Through a controlled user study involving 85 participants in a hypothetical medical decision-making scenario, the authors systematically investigate how group-specific model performance and training data availability shape fairness judgments. They find that when group-wise performance is equal or unknown, participants favor outcome equality; however, when performance disparities are attributable to data imbalance, models preserving these differences are perceived as more fair. These results demonstrate that human fairness judgments are not solely based on outcome equality but are significantly influenced by beliefs about the underlying causes of disparities, thereby challenging the prevailing assumption that statistical parity should serve as the default standard for algorithmic fairness.

📝 Abstract
Differences in data distributions between demographic groups, known as the problem of infra-marginality, complicate how people evaluate fairness in machine learning models. We present a user study with 85 participants in a hypothetical medical decision-making scenario to examine two treatments: group-specific model performance and training data availability. Our results show that participants did not equate fairness with simple statistical parity. When group-specific performances were equal or unavailable, participants preferred models that produced equal outcomes; when performances differed, especially in ways consistent with data imbalances, they judged models that preserved those differences as more fair. These findings highlight that fairness judgments are shaped not only by outcomes, but also by beliefs about the causes of disparities. We discuss implications for popular group fairness definitions and system design, arguing that accounting for distributional context is critical to aligning algorithmic fairness metrics with human expectations in real-world applications.
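The abstract contrasts statistical parity (equal positive-decision rates across groups) with group-specific model performance. As a minimal sketch of that distinction, the toy example below computes both quantities for two hypothetical groups; all names and numbers are illustrative and are not the study's data.

```python
# Minimal sketch: statistical parity vs. group-wise performance.
# All group names, decisions, and labels are hypothetical.

def positive_rate(decisions):
    """Fraction of positive decisions (1 = treat, 0 = don't)."""
    return sum(decisions) / len(decisions)

def accuracy(decisions, labels):
    """Fraction of decisions that match the true labels."""
    return sum(d == y for d, y in zip(decisions, labels)) / len(labels)

# Hypothetical model outputs for two demographic groups with
# different underlying label distributions (infra-marginality).
group_a = {"decisions": [1, 1, 0, 1, 0], "labels": [1, 1, 0, 1, 1]}
group_b = {"decisions": [1, 0, 0, 0, 0], "labels": [1, 0, 0, 1, 0]}

# Statistical parity difference: the gap in positive-decision rates.
spd = positive_rate(group_a["decisions"]) - positive_rate(group_b["decisions"])

# Group-wise accuracies: unequal base rates can make outcome rates
# differ even when per-group performance is identical.
acc_a = accuracy(group_a["decisions"], group_a["labels"])
acc_b = accuracy(group_b["decisions"], group_b["labels"])

print(f"statistical parity difference: {spd:.2f}")  # 0.40
print(f"accuracy A: {acc_a:.2f}, accuracy B: {acc_b:.2f}")  # 0.80 each
```

In this toy case the two groups receive positive decisions at very different rates (violating statistical parity) while the model is equally accurate for both, which is the kind of tension between outcome equality and performance-preserving fairness that the study asks participants to judge.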
Problem

Research questions and friction points this paper is trying to address.

fairness perception
infra-marginality
algorithmic fairness
group disparities
human judgment
Innovation

Methods, ideas, or system contributions that make the work stand out.

infra-marginality
algorithmic fairness
human perception
group fairness
distributional context