🤖 AI Summary
In high-stakes human-AI collaborative decision-making (e.g., healthcare, finance), suboptimal human reliance on AI, whether over- or under-trust, severely undermines team performance. To address this, we propose a “self-confidence shaping” intervention paradigm designed to calibrate human trust and optimize reliance on AI. Methodologically, we first quantify a theoretical upper bound on the team-performance improvement attainable through confidence shaping; we then formulate a self-confidence prediction task to uncover an intervenable link between real-time affective states and confidence; finally, we combine behavioral experiments, machine learning (logistic regression and SVM), and causal inference to validate the feasibility of the intervention. An empirical study with 121 participants demonstrates (1) a nearly 50% improvement in team decision-making performance, (2) 67% accuracy in confidence prediction using only lightweight models, and (3) the practical feasibility of real-time, affect-informed confidence interventions. This work establishes an interpretable, deployable pathway to trust calibration for trustworthy human-AI collaboration.
📝 Abstract
In AI-assisted decision-making, it is crucial but challenging for humans to appropriately rely on AI, especially in high-stakes domains such as finance and healthcare. This paper addresses this problem from a human-centered perspective by presenting an intervention for self-confidence shaping, designed to calibrate self-confidence to a targeted level. We first demonstrate the impact of self-confidence shaping by quantifying the upper-bound improvement in human-AI team performance. Our behavioral experiments with 121 participants show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI. We then introduce a self-confidence prediction task to identify when our intervention is needed. Our results show that simple machine-learning models achieve 67% accuracy in predicting self-confidence. We further illustrate the feasibility of such interventions. The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence. Finally, we outline future research directions to support the deployment of self-confidence shaping in real-world scenarios for effective human-AI collaboration.
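To make the prediction task concrete, here is a minimal sketch of the kind of lightweight model the abstract describes: logistic regression mapping a sentiment signal to a binary self-confidence label (high/low). The feature, the data-generating process, and the positive sentiment-confidence relationship are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Sketch: logistic regression predicting binary self-confidence (1 = high,
# 0 = low) from a single hypothetical sentiment score in [-1, 1].
# All data here is synthetic; the sentiment->confidence link is an assumption.
import math
import random

random.seed(0)

# Synthetic dataset: higher sentiment is made more likely to co-occur with
# high self-confidence (noisy, so the task is not trivially separable).
data = []
for _ in range(200):
    s = random.uniform(-1.0, 1.0)
    p_high = 1.0 / (1.0 + math.exp(-3.0 * s))  # generating process (assumed)
    y = 1 if random.random() < p_high else 0
    data.append((s, y))

# One-feature logistic regression trained with full-batch gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    gw = gb = 0.0
    for s, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * s + b)))
        gw += (p - y) * s
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(s):
    """Predict high (1) or low (0) self-confidence from sentiment s."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * s + b))) >= 0.5 else 0

accuracy = sum(predict(s) == y for s, y in data) / len(data)
print(f"learned weight: {w:.2f}, training accuracy: {accuracy:.2f}")
```

In practice the paper's models (logistic regression, SVM) would be trained on real behavioral and affective features rather than a single synthetic sentiment score, but the pipeline shape is the same: extract a real-time affective signal, classify confidence, and trigger the intervention only when predicted confidence is miscalibrated.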