Overconfident and Unconfident AI Hinder Human-AI Collaboration

📅 2024-02-12
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study identifies a bidirectional risk of miscalibrated AI confidence in human-AI collaboration: overconfidence induces misuse of AI advice, while underconfidence triggers its disuse, and both erode trust, recommendation adherence, and decision effectiveness. Through controlled behavioral experiments that pair confidence calibration assessment with multidimensional trust and adoption metrics, the authors provide systematic empirical evidence that confidence miscalibration impairs collaborative quality. They further demonstrate that while transparency interventions improve users' ability to detect miscalibration, these interventions can paradoxically foster new forms of distrust, motivating the proposed design principle of "calibration before transparency." The results indicate that merely exposing uncalibrated confidence scores degrades trustworthy collaboration; only rigorously calibrated confidence estimates reliably support robust human-AI joint decision-making.
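The summary's central quantity, the gap between an AI's stated confidence and its actual correctness likelihood (CL), can be made concrete with a standard metric such as expected calibration error (ECE). The sketch below is not from the paper: it simulates calibrated, overconfident, and underconfident advisors on synthetic decisions, and the function name, the ±0.15 confidence offsets, and the bin count are all illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |mean confidence - accuracy| gap, weighted by the
    fraction of predictions that fall in each confidence bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

rng = np.random.default_rng(0)
true_cl = rng.uniform(0.5, 1.0, size=5000)       # actual correctness likelihood
correct = rng.random(5000) < true_cl             # simulated right/wrong outcomes

calibrated = true_cl                             # stated confidence == actual CL
overconf   = np.clip(true_cl + 0.15, 0, 1)       # systematically overstates CL
underconf  = np.clip(true_cl - 0.15, 0, 1)       # systematically understates CL

for name, conf in [("calibrated", calibrated),
                   ("overconfident", overconf),
                   ("underconfident", underconf)]:
    print(f"{name:>14}: ECE = {expected_calibration_error(conf, correct):.3f}")
```

A calibrated advisor scores near zero, while the shifted variants score roughly at the size of their offset; that offset is the kind of gap the study's participants struggled to detect without calibration support.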

📝 Abstract
AI transparency is a central pillar of responsible AI deployment and effective human-AI collaboration. A critical approach is communicating uncertainty, such as displaying the AI's confidence level, or its correctness likelihood (CL), to users. However, these confidence levels are often uncalibrated, either overestimating or underestimating the actual CL, posing risks and harms to human-AI collaboration. This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes. We further examine the impact of increased transparency, achieved through trust calibration support, on these outcomes. Our results reveal that uncalibrated AI confidence leads to both misuse of overconfident AI and disuse of unconfident AI, thereby hindering outcomes of human-AI collaboration. A lack of trust calibration support exacerbates this issue by making uncalibrated confidence harder to detect, promoting misuse and disuse of AI. Conversely, trust calibration support aids in recognizing uncalibration and reduces misuse, but it also fosters distrust and causes disuse of AI. Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration and suggest directions for AI design and regulation.
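The abstract's conclusion, that only calibrated confidence supports trustworthy collaboration, raises the practical question of how confidence gets calibrated in the first place. One common post-hoc approach (a standard baseline in the calibration literature, not a method described in this paper) is temperature scaling: fit a single scalar T on held-out data and divide the model's logits by it before the softmax. Below is a minimal sketch assuming NumPy/SciPy; fit_temperature and the 5x logit inflation in the demo are hypothetical names and values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)                  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels):
    """Pick the single scalar T > 0 that best calibrates held-out logits."""
    res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                          method="bounded")
    return res.x

# Demo: draw labels from the softmax of raw logits, so the raw logits are
# calibrated by construction; inflating them 5x makes them overconfident.
rng = np.random.default_rng(1)
raw = rng.normal(size=(2000, 3))
p = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)
labels = np.array([rng.choice(3, p=pi) for pi in p])
print(f"fitted T on 5x-inflated logits: {fit_temperature(raw * 5.0, labels):.2f}")
# comes out near 5: the fitted temperature undoes the inflation
```

The point of the single-parameter fit is that it sharpens or softens confidence without changing which answer the model prefers, which is the "calibration before transparency" ordering the paper argues for: fix the scores first, then show them.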
Problem

Research questions and friction points this paper is trying to address.

Miscalibrated AI confidence impairs user reliance and decision efficacy
Users struggle to detect AI miscalibration during decision-making processes
Communicating calibration levels helps detection but reduces trust and efficacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Examined effects of miscalibrated AI confidence on users
Tested communication of AI calibration levels to users
Explored design implications for AI miscalibration risks