🤖 AI Summary
Automation bias, where clinicians overrely on AI recommendations, can exacerbate diagnostic errors, especially when the AI outputs incorrect predictions. Selective prediction (the AI issuing recommendations only when it is highly confident) is widely assumed to mitigate this risk by approximating human-only decision-making.
Method: A controlled user study with 259 clinicians evaluated diagnostic and treatment accuracy under three conditions: no AI, full AI assistance, and selective prediction, in which the AI abstains, and says so, whenever its confidence falls below a threshold.
Contribution/Results: While selective prediction recovered overall accuracy on error-prone cases from 56% (full AI assistance) to 64%, it significantly increased missed diagnoses (+18%) and missed treatments (+35%) relative to no AI input. This reveals a novel behavioral bias: clinicians systematically under-respond when the AI remains silent. The study provides the first empirical challenge to the implicit assumption that decisions made under selective prediction are behaviorally equivalent to decisions made without AI. It underscores the necessity of jointly modeling AI output omission and human cognitive responses, offering critical behavioral evidence and design warnings for trustworthy AI deployment in clinical settings.
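The selective-prediction mechanism studied here can be sketched in a few lines: the model surfaces its top prediction only when the associated probability clears a confidence threshold, and otherwise explicitly abstains. The sketch below is illustrative, not the paper's implementation; the `0.8` threshold, label names, and `SelectiveOutput` type are assumptions for demonstration.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class SelectiveOutput:
    label: Optional[str]   # None when the model abstains
    confidence: float      # top-class probability, reported either way
    abstained: bool        # True means the clinician is told "no recommendation"

def selective_predict(probs: Sequence[float], labels: Sequence[str],
                      threshold: float = 0.8) -> SelectiveOutput:
    """Surface the top prediction only when its probability clears the
    confidence threshold; otherwise abstain and say so (hypothetical
    threshold and interface, for illustration only)."""
    probs = list(probs)
    conf = max(probs)
    if conf >= threshold:
        return SelectiveOutput(labels[probs.index(conf)], conf, abstained=False)
    return SelectiveOutput(None, conf, abstained=True)
```

Note that even in the abstain branch the clinician is informed that the AI declined to answer, which is precisely the signal the study finds clinicians under-respond to.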
📝 Abstract
AI has the potential to augment human decision-making. However, even high-performing models can produce inaccurate predictions when deployed. These inaccuracies, combined with automation bias, where humans overrely on AI predictions, can result in worse decisions. Selective prediction, in which potentially unreliable model predictions are hidden from users, has been proposed as a solution. This approach assumes that when the AI abstains and informs the user of the abstention, humans make decisions as they would without AI involvement. To test this assumption, we study the effects of selective prediction on human decisions in a clinical context. We conducted a user study of 259 clinicians tasked with diagnosing and treating hospitalized patients. We compared their baseline performance without any AI involvement to their AI-assisted accuracy with and without selective prediction. Our findings indicate that selective prediction mitigates the negative effects of inaccurate AI on decision accuracy. Compared to no AI assistance, clinician accuracy declined when shown inaccurate AI predictions (66% [95% CI: 56%-75%] vs. 56% [95% CI: 46%-66%]), but recovered under selective prediction (64% [95% CI: 54%-73%]). However, while selective prediction nearly maintains overall accuracy, our results suggest that it alters patterns of mistakes: when informed that the AI abstains, clinicians underdiagnose (18% increase in missed diagnoses) and undertreat (35% increase in missed treatments) compared to no AI input at all. Our findings underscore the importance of empirically validating assumptions about how humans engage with AI within human-AI systems.