The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-stakes human-AI collaborative decision-making (e.g., healthcare, finance), suboptimal human reliance on AI, whether over- or under-trust, severely undermines team performance. To address this, we propose a "self-confidence shaping" intervention paradigm designed to calibrate human trust and optimize reliance on AI. Methodologically, we first derive a theoretical upper bound on the team-performance improvement attainable through confidence shaping; we then formulate a self-confidence prediction task to identify when intervention is needed and uncover an intervenable link between real-time affective states and confidence; finally, we combine behavioral experiments, lightweight machine-learning models (logistic regression and SVM), and causal analysis to validate the intervention's feasibility. An empirical study with 121 participants demonstrates: (1) a nearly 50% improvement in human-AI team decision-making performance; (2) 67% accuracy in predicting self-confidence using only lightweight models; and (3) the practical feasibility of real-time, affect-informed confidence interventions. This work establishes an interpretable, deployable pathway for trust calibration in trustworthy human-AI collaboration.

📝 Abstract
In AI-assisted decision-making, it is crucial but challenging for humans to appropriately rely on AI, especially in high-stakes domains such as finance and healthcare. This paper addresses this problem from a human-centered perspective by presenting an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level. We first demonstrate the impact of self-confidence shaping by quantifying the upper-bound improvement in human-AI team performance. Our behavioral experiments with 121 participants show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI. We then introduce a self-confidence prediction task to identify when our intervention is needed. Our results show that simple machine-learning models achieve 67% accuracy in predicting self-confidence. We further illustrate the feasibility of such interventions. The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence. Finally, we outline future research directions to support the deployment of self-confidence shaping in a real-world scenario for effective human-AI collaboration.
Problem

Research questions and friction points this paper is trying to address.

Calibrating human self-confidence in AI-assisted decision-making.
Reducing over- and under-reliance on AI systems.
Predicting self-confidence for targeted interventions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-confidence shaping improves human-AI team performance.
Machine learning predicts self-confidence with 67% accuracy.
Modifying sentiment is a viable strategy for self-confidence shaping.
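The self-confidence prediction task described above can be sketched with a lightweight classifier of the kind the paper reports (logistic regression). The features below (sentiment score, response time, agreement with the AI) and the synthetic data are illustrative assumptions, not the authors' actual dataset or feature set:

```python
# Hypothetical sketch of a self-confidence prediction task: a lightweight
# logistic-regression model maps per-trial behavioral/affective features
# to a binary self-confidence label. All feature names and data here are
# synthetic illustrations, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Assumed features: sentiment of the participant's free-text rationale,
# response time (seconds), and whether they agreed with the AI suggestion.
sentiment = rng.normal(0.0, 1.0, n)
response_time = rng.normal(5.0, 2.0, n)
agreed_with_ai = rng.integers(0, 2, n).astype(float)

# Assumed generative link: more positive sentiment -> higher confidence,
# echoing the sentiment-confidence relationship the paper observes.
logits = 1.5 * sentiment - 0.2 * (response_time - 5.0) + 0.5 * agreed_with_ai
high_confidence = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X = np.column_stack([sentiment, response_time, agreed_with_ai])
X_train, X_test, y_train, y_test = train_test_split(
    X, high_confidence, test_size=0.3, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

A model like this, scoring each trial in real time, could flag low-predicted-confidence moments as candidates for a sentiment-based shaping intervention.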