Adaptive XAI in High Stakes Environments: Modeling Swift Trust with Multimodal Feedback in Human AI Teams

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing eXplainable AI (XAI) methods struggle to support rapid human-AI trust establishment in high-stakes scenarios (e.g., emergency response) because they rely on explicit user feedback and lack adaptivity in explanation generation. Method: This paper proposes an adaptive XAI framework that leverages multimodal implicit feedback, specifically EEG, ECG, and eye-tracking signals, to model dynamic, coupled cognitive-affective trust states. The framework enables non-intrusive, personalized explanation generation and real-time adaptation without requiring explicit user input, and a multi-objective optimization-driven approach supports online operator state perception and adaptive explanation strategy selection. Contribution: The conceptual framework is positioned to accelerate trust formation and improve decision-making efficiency under high-pressure conditions, laying a foundation for trustworthy human-AI collaboration.

📝 Abstract
Effective human-AI teaming heavily depends on swift trust, particularly in high-stakes scenarios such as emergency response, where timely and accurate decision-making is critical. In these time-sensitive and cognitively demanding settings, adaptive explainability is essential for fostering trust between human operators and AI systems. However, existing explainable AI (XAI) approaches typically offer uniform explanations and rely heavily on explicit feedback mechanisms, which are often impractical in such high-pressure scenarios. To address this gap, we propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states through implicit feedback, thereby enhancing swift trust in high-stakes environments. The proposed adaptive explainability trust framework (AXTF) leverages physiological and behavioral signals, such as EEG, ECG, and eye tracking, to infer user states and support explanation adaptation. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates. These estimates guide the modulation of explanation features, enabling responsive and personalized support that promotes swift trust in human-AI collaboration. This conceptual framework establishes a foundation for developing adaptive, non-intrusive XAI systems tailored to the rigorous demands of high-pressure, time-sensitive environments.
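The trust estimation model described in the abstract can be sketched as a mapping from normalized operator states (workload, stress, emotional valence) to a scalar trust estimate. The linear weighting below is a hypothetical illustration under assumed personalized weights, not the authors' actual formulation:

```python
def estimate_trust(workload, stress, emotion_valence, weights=(0.4, 0.4, 0.2)):
    """Map normalized operator states (each in [0, 1]) to a trust estimate in [0, 1].

    Hypothetical sketch: the paper proposes a multi-objective, personalized
    model; the linear combination and default weights here are assumptions.
    """
    w_workload, w_stress, w_emotion = weights
    # Assumption: high workload and stress depress trust, while
    # above-neutral emotional valence (> 0.5) supports it.
    trust = (1.0
             - w_workload * workload
             - w_stress * stress
             + w_emotion * (emotion_valence - 0.5))
    return max(0.0, min(1.0, trust))
```

In a deployed system the weights would be fit per operator from labeled physiological data rather than fixed, which is what makes the estimate "personalized."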
Problem

Research questions and friction points this paper is trying to address.

Enhancing swift trust in high-stakes human-AI teams
Adapting XAI using implicit cognitive and emotional feedback
Personalizing trust estimation via physiological and behavioral signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive XAI with multimodal implicit feedback
Real-time cognitive and emotional state monitoring
Multi-objective personalized trust estimation model
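The multi-objective explanation strategy selection listed above could be illustrated as a scalarized trade-off between informativeness and cognitive cost, weighted by the operator's current trust and workload. The `ExplanationStrategy` fields, the weighting rule, and the scalarization are assumptions for illustration, not the paper's method:

```python
from dataclasses import dataclass

@dataclass
class ExplanationStrategy:
    name: str
    informativeness: float  # assumed normalized to [0, 1]
    cognitive_cost: float   # assumed normalized to [0, 1]

def select_strategy(strategies, trust, workload):
    """Pick the strategy maximizing a scalarized two-objective score.

    Hypothetical sketch: cognitive cost is penalized more heavily when
    workload is high and trust is low, favoring concise explanations.
    """
    cost_weight = 0.5 + 0.5 * workload * (1.0 - trust)

    def score(s):
        return s.informativeness - cost_weight * s.cognitive_cost

    return max(strategies, key=score)
```

Under this sketch, a calm, trusting operator receives richer explanations, while a stressed, overloaded one receives lighter-weight ones, which matches the adaptive behavior the framework targets.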
Nishani Fernando
Deakin University, Geelong, Victoria, Australia
Bahareh Nakisa
Senior Lecturer in AI, Deakin University
Human-Machine Teaming, Trust, Affective Computing, Ethical AI, Cognition
Adnan Ahmad
Deakin University, Australia
Machine Learning, Federated Learning, Data Science
Mohammad Naim Rastgoo
Monash University, Melbourne, Victoria, Australia