It's only fair when I think it's fair: How Gender Bias Alignment Undermines Distributive Fairness in Human-AI Collaboration

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how alignment between human and AI gender biases affects perceived fairness and adoption of AI recommendations. Using a 2×2 between-subjects experimental design, we integrate behavioral measures, validated fairness perception scales, and recommendation reliance assessments. Results show that when an AI’s gender bias aligns with users’ preexisting biases, users significantly overestimate its fairness—even when the system satisfies formal fairness criteria—and exhibit greater reliance on its recommendations. Conversely, formally fair AI systems exhibiting bias misalignment are systematically disregarded. This is the first empirical demonstration that “bias alignment” distorts human fairness judgments, challenging the prevailing assumption that formal fairness suffices for equitable AI design. We argue that fairness must jointly optimize algorithmic objectivity and human cognitive alignment. These findings provide a foundational cognitive mechanism for designing trustworthy human-AI collaboration systems.
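The paper does not state which formal fairness criterion its AI system satisfies; as an illustration of what "formally fair" typically means in this literature, the following hypothetical sketch checks demographic parity, i.e., that positive recommendation rates are equal across gender groups. The function name and data are assumptions for illustration only.

```python
# Illustrative sketch only -- the paper does not specify its fairness criterion.
# Demographic parity is one common formal notion: the rate of positive
# recommendations should be (approximately) equal across gender groups.
from collections import defaultdict

def demographic_parity_gap(genders, recommendations):
    """Absolute gap in positive-recommendation rates between groups
    (0.0 means perfectly 'formally fair' under this criterion)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for g, rec in zip(genders, recommendations):
        totals[g] += 1
        positives[g] += int(rec)  # rec is 1 for a positive recommendation
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical data: both groups receive positive recommendations at a 50%
# rate, so the gap is 0.0 -- formally fair, yet, per the study's finding,
# users may still judge the system unfair if its outputs clash with their
# own preexisting biases.
genders = ["f", "f", "m", "m"]
recs    = [1, 0, 1, 0]
print(demographic_parity_gap(genders, recs))  # 0.0
```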

📝 Abstract
Human-AI collaboration is increasingly relevant in consequential domains where AI recommendations support human discretion. However, the effectiveness, capability, and fairness of human-AI teams depend strongly on human perceptions of AI. Positive fairness perceptions have been shown to foster trust in and acceptance of AI recommendations. Yet, work on confirmation bias highlights that humans selectively adhere to AI recommendations that align with their expectations and beliefs, despite these not necessarily being correct or fair. This raises the question of whether confirmation bias also extends to the alignment of gender bias between human and AI decisions. In our study, we examine how gender bias alignment influences fairness perceptions and reliance. The results of a 2×2 between-subjects study demonstrate the connection between gender bias alignment, fairness perceptions, and reliance, showing that merely constructing a "formally fair" AI system is insufficient for optimal human-AI collaboration; ultimately, AI recommendations will likely be overridden if biases do not align.
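The abstract treats "reliance" as a behavioral outcome but does not spell out its operationalization here. A common measure in this line of work is the rate at which participants switch to the AI's recommendation when it disagrees with their initial judgment; the sketch below is a hypothetical illustration of that idea, not the paper's exact measure.

```python
# Hypothetical sketch of one common reliance measure, not necessarily the
# paper's operationalization: among trials where the AI recommendation
# disagreed with the participant's initial decision, the fraction of final
# decisions that switched to match the AI.
def switch_rate(initial, ai_rec, final):
    disagreements = [(a, f) for i, a, f in zip(initial, ai_rec, final) if i != a]
    if not disagreements:
        return 0.0
    switched = sum(1 for a, f in disagreements if f == a)
    return switched / len(disagreements)

# Example: the AI disagrees with the participant's initial decision on two
# trials; the participant follows the AI on one of them -> reliance = 0.5.
initial = [0, 1, 1, 0]
ai_rec  = [0, 0, 1, 1]
final   = [0, 0, 1, 0]
print(switch_rate(initial, ai_rec, final))  # 0.5
```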
Problem

Research questions and friction points this paper is trying to address.

Examining the impact of gender bias alignment on fairness perceptions
Exploring how bias alignment affects human-AI reliance
Assessing the insufficiency of formally fair AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Examines gender bias alignment in human-AI collaboration
Links bias alignment to fairness perceptions and reliance
Highlights the insufficiency of formally fair AI systems
Domenique Zipperling
PhD, University of Bayreuth
Human-AI Teams · XAI · Fairness in AI
Luca Deck
Universität Bayreuth
Algorithmic Fairness · Explainable AI · Ethical AI
Julia Lanzl
University of Hohenheim, Hohenheim, Germany; Fraunhofer FIT, Augsburg, Germany
Niklas Kühl
University of Bayreuth & Fraunhofer FIT, Bayreuth, Germany