🤖 AI Summary
This study investigates how AI transparency, specifically the proactive communication of a model's capability boundaries and error patterns, affects human-AI collaboration, with emphasis on trust calibration and decision support. We propose a decision-tree-based method for model error attribution that enables an AI system to generate interpretable, actionable performance insights rather than single-point predictions alone. A user study on an income prediction task systematically evaluates how explanation granularity affects user trust, reliance behavior, and task performance. Results show that structured performance feedback significantly improves human decision accuracy (+12.3%) and mitigates both overreliance and undertrust, enabling dynamic trust calibration. This work is the first to integrate fine-grained modeling of model error patterns with human-centered evaluation, establishing a scalable, empirically grounded explanation paradigm for designing trustworthy human-AI collaborative systems.
📝 Abstract
The promise of human-AI teaming lies in humans and AI working together to achieve performance levels neither could reach alone. Effective communication between AI and humans is crucial for such teamwork, enabling users to benefit efficiently from AI assistance. This paper investigates how AI communication affects human-AI team performance. We examine AI explanations that convey the AI's awareness of its own strengths and limitations. To achieve this, we train a decision tree on the model's mistakes, allowing the system to recognize and explain where and why it is likely to err. Through a user study on an income prediction task, we assess the impact of varying levels of information and explanation about AI predictions. Our results show that AI performance insights enhance task performance, and that conveying the AI's awareness of its strengths and weaknesses improves trust calibration. These findings highlight the importance of considering how information delivery influences user trust and reliance in AI-assisted decision-making.
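The core mechanism described above, fitting a secondary decision tree to a primary model's mistakes so that its splits double as error explanations, can be illustrated with a minimal sketch. The details here are assumptions, not the paper's actual implementation: scikit-learn throughout, the UCI Adult dataset (via `fetch_openml`) as a stand-in income prediction task, a random forest as the primary model, and numeric features only for brevity.

```python
# Sketch: learn where a primary income-prediction model errs by fitting a
# shallow decision tree to its mistakes, then reading rules off that tree.
# Dataset, features, and model choices are illustrative assumptions only.
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# UCI Adult as a stand-in income prediction task; numeric features only.
adult = fetch_openml("adult", version=2, as_frame=True)
X = adult.data.select_dtypes(include="number")
y = (adult.target == ">50K").astype(int)

X_train, X_held, y_train, y_held = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Primary model: predicts the income bracket.
primary = RandomForestClassifier(n_estimators=100, random_state=0)
primary.fit(X_train, y_train)

# Secondary target: 1 wherever the primary model is wrong on held-out data.
mistakes = (primary.predict(X_held) != y_held).astype(int)

# Shallow "error tree": its splits carve out regions where the model errs.
error_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
error_tree.fit(X_held, mistakes)

# Human-readable rules approximating "where and why the model might err".
print(export_text(error_tree, feature_names=list(X.columns)))
```

Capping the error tree at a small depth trades fidelity for rules short enough to show users: leaves with a high mistake rate give the "where", and the conditions along the path to each leaf give the "why".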