🤖 AI Summary
Existing explainable AI (XAI) methods struggle to support rapid human-AI trust establishment in high-stakes scenarios (e.g., emergency response) because they rely on explicit user feedback and lack adaptivity in explanation generation. Method: This paper proposes an adaptive XAI framework that leverages multimodal implicit feedback—specifically EEG, ECG, and eye-tracking signals—to model dynamic, coupled cognitive-affective trust states. The framework enables non-intrusive, personalized explanation generation and real-time adaptation without requiring explicit user input: a multi-objective, personalized trust estimation model supports online operator state perception and adaptive explanation strategy selection. Contribution: As a conceptual framework, this work establishes a foundation for adaptive XAI systems aimed at accelerating trust formation and decision-making efficiency under high-pressure conditions, outlining a paradigm for trustworthy human-AI collaboration.
📝 Abstract
Effective human-AI teaming depends heavily on swift trust, particularly in high-stakes scenarios such as emergency response, where timely and accurate decision-making is critical. In these time-sensitive and cognitively demanding settings, adaptive explainability is essential for fostering trust between human operators and AI systems. However, existing explainable AI (XAI) approaches typically offer uniform explanations and rely heavily on explicit feedback mechanisms, which are often impractical in such high-pressure scenarios. To address this gap, we propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states through implicit feedback, thereby enhancing swift trust in high-stakes environments. The proposed adaptive explainability trust framework (AXTF) leverages physiological and behavioral signals, such as EEG, ECG, and eye tracking, to infer user states and support explanation adaptation. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates. These estimates guide the modulation of explanation features, enabling responsive and personalized support that promotes swift trust in human-AI collaboration. This conceptual framework establishes a foundation for developing adaptive, non-intrusive XAI systems tailored to the rigorous demands of high-pressure, time-sensitive environments.
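To make the pipeline concrete, here is a minimal sketch of the idea described in the abstract: inferred operator states (workload, stress, emotion) are mapped to a scalar trust estimate, which then selects an explanation strategy. The paper specifies no concrete model, so the linear weighting, state normalization, and strategy tiers below are invented for illustration only.

```python
# Hypothetical sketch of the AXTF trust-estimation idea. The weights,
# the linear form, and the strategy thresholds are NOT from the paper;
# they are placeholder assumptions for demonstration.

def estimate_trust(workload: float, stress: float, emotion: float,
                   weights=(0.4, 0.4, 0.2)) -> float:
    """Map inferred operator states (each normalized to [0, 1]) to a
    scalar trust estimate in [0, 1]. Assumes high workload and stress
    erode trust, while positive emotional valence supports it."""
    w_wl, w_st, w_em = weights
    trust = 1.0 - w_wl * workload - w_st * stress + w_em * (emotion - 0.5)
    return max(0.0, min(1.0, trust))  # clamp to [0, 1]

def select_explanation(trust: float) -> str:
    """Pick an explanation strategy from the trust estimate
    (hypothetical tiers: low trust under overload gets terse cues,
    high trust gets a full rationale)."""
    if trust < 0.3:
        return "minimal-cue"      # reduce load: short, salient highlights
    elif trust < 0.7:
        return "feature-level"    # moderate detail: key feature attributions
    return "full-rationale"       # complete reasoning chain

# Example: a stressed, overloaded operator receives a terse explanation.
t = estimate_trust(workload=0.9, stress=0.8, emotion=0.3)
print(round(t, 2), select_explanation(t))
```

In a full implementation, the hand-set weights would be replaced by a personalized model fit per user, and the states would be inferred online from the EEG, ECG, and eye-tracking signals the framework names.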