Biased Minds Meet Biased AI: How Class Imbalance Shapes Appropriate Reliance and Interacts with Human Base Rate Neglect

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the interaction between AI-induced class imbalance bias and human base-rate neglect in human-AI collaborative decision-making. Using a within-subjects online experiment (N=46), we compared human reliance calibration on AI recommendations generated by models trained on balanced versus imbalanced datasets across a three-class medical diagnosis task. Results reveal a significant bidirectional amplification effect between the two biases: AI imbalance bias exacerbates human base-rate neglect, and conversely, human base-rate neglect impairs appropriate calibration of trust in AI outputs—leading to systematic overreliance or underreliance and degrading overall decision calibration. We introduce the novel construct of “compound human-AI bias” to characterize this coupled, systemic bias propagation. Moving beyond unidirectional attribution frameworks, our work advances an interactionist perspective, empirically demonstrating how algorithmic and cognitive biases co-evolve in sociotechnical systems. These findings provide both theoretical grounding and empirical evidence for trustworthy AI design and targeted human-AI collaboration interventions.

📝 Abstract
Humans increasingly interact with artificial intelligence (AI) in decision-making. However, both AI and humans are prone to biases. While AI and human biases have been studied extensively in isolation, this paper examines their complex interaction. Specifically, we examined how class imbalance as an AI bias affects people's ability to appropriately rely on an AI-based decision-support system, and how it interacts with base rate neglect as a human bias. In a within-subject online study (N = 46), participants classified three diseases using an AI-based decision-support system trained on either a balanced or an imbalanced dataset. We found that class imbalance disrupted participants' calibration of AI reliance. Moreover, we observed mutually reinforcing effects between class imbalance and base rate neglect, offering evidence of a compound human-AI bias. Based on these findings, we advocate for an interactionist perspective and further research into the mutually reinforcing effects of biases in human-AI interaction.
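The core mechanism the study probes, a model trained on an imbalanced dataset whose recommendations inherit that skew, can be illustrated with a minimal sketch. The labels, counts, and prior-sampling "model" below are hypothetical stand-ins for illustration only, not the paper's actual task, diseases, or classifier:

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical three-class diagnosis task (labels are illustrative).
CLASSES = ["disease_A", "disease_B", "disease_C"]

def train_prior_model(labels):
    """Learn class priors from training labels -- a deliberately simple
    stand-in for a classifier whose outputs inherit the training
    distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: counts[c] / total for c in CLASSES}

def predict(priors, n):
    """Sample n AI recommendations according to the learned priors."""
    return random.choices(CLASSES, weights=[priors[c] for c in CLASSES], k=n)

# Balanced vs. imbalanced training sets (counts are made up).
balanced = CLASSES * 100                                    # 100 each
imbalanced = ["disease_A"] * 240 + ["disease_B"] * 40 + ["disease_C"] * 20

for name, data in [("balanced", balanced), ("imbalanced", imbalanced)]:
    priors = train_prior_model(data)
    preds = Counter(predict(priors, 1000))
    print(name, {c: preds[c] for c in CLASSES})
```

Under the imbalanced training set, the toy model over-recommends the majority class, which is the distorted base rate that participants in the study then had to calibrate their reliance against.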
Problem

Research questions and friction points this paper is trying to address.

- Examining how AI class imbalance affects human reliance calibration
- Investigating the interaction between AI class imbalance and human base rate neglect
- Identifying compound bias effects in human-AI collaborative decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Class imbalance disrupts AI reliance calibration
- Mutually reinforcing effects between AI and human biases
- Interactionist perspective for compound human-AI bias
Nick von Felten
University of St. Gallen, Switzerland
Johannes Schöning
University of St. Gallen, UCL
Human-Computer Interaction, Human-Centered AI, Ubicomp, Geoinformatics, Location-based Services
Klaus Opwis
University of Basel, Switzerland
Cognitive Psychology, Memory, Human-Computer Interaction
Nicolas Scharowski
Center for General Psychology and Methodology, University of Basel, Switzerland