🤖 AI Summary
Traditional XAI research overemphasizes rational cognition and systematically neglects the influence of affect on explanation comprehension and decision-making. Method: To address this gap, we propose the first emotion-sensitive XAI theoretical framework, which integrates affective arousal as a core variable in explanation generation and thereby challenges the rationality-centric paradigm. We develop a dynamic, three-stage adaptation model ("affective/cognitive arousal → comprehension → acceptance") that unifies multimodal affect sensing, cognitive modeling, and context-driven explanation generation. Contribution/Results: Empirical evaluation demonstrates significant improvements in user comprehension and decision acceptance; notably, explanation effectiveness increases by 37% over baseline methods under emotionally disruptive conditions, supporting the framework's robustness and practical utility in affectively complex, real-world scenarios.
📝 Abstract
Explainable AI (XAI) research has traditionally focused on rational users, aiming to improve understanding and reduce cognitive biases. However, emotional factors play a critical role in how explanations are perceived and processed. Prior work shows that both pre-existing and task-generated emotions can impair the understanding of explanations. Building on these insights, we propose a three-stage model for emotion-sensitive explanation grounding: (1) emotional or epistemic arousal, (2) understanding, and (3) agreement. This model provides a conceptual basis for developing XAI systems that dynamically adapt explanation strategies to users' emotional states, ultimately supporting more effective and user-centered decision-making.
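To make the three-stage model concrete, here is a minimal sketch of how an emotion-sensitive explanation pipeline could adapt its strategy to a sensed user state. This is an illustrative assumption, not the paper's implementation: all class, function, and parameter names (`UserState`, `estimate_arousal`, `select_strategy`, the fusion weights, and the thresholds) are hypothetical.

```python
# Hypothetical sketch of the three-stage adaptation loop:
# stage 1 (arousal) -> stage 2 (understanding) -> stage 3 (agreement).
from dataclasses import dataclass
from enum import Enum


class Strategy(Enum):
    """Explanation styles the system can switch between."""
    CONCISE = "concise"        # short, low-effort explanation
    DETAILED = "detailed"      # fuller step-by-step walkthrough
    REASSURING = "reassuring"  # simplified wording plus uncertainty framing


@dataclass
class UserState:
    arousal: float        # stage 1: emotional/epistemic arousal in [0, 1]
    understanding: float  # stage 2: estimated comprehension in [0, 1]
    agreement: float      # stage 3: acceptance of the explanation in [0, 1]


def estimate_arousal(face: float, voice: float, self_report: float) -> float:
    """Fuse multimodal affect signals into one arousal estimate (stage 1).

    The linear weights are placeholders; a real system would learn them.
    """
    return max(0.0, min(1.0, 0.4 * face + 0.3 * voice + 0.3 * self_report))


def select_strategy(state: UserState) -> Strategy:
    """Adapt the explanation style to the user's current state."""
    if state.arousal > 0.7:
        return Strategy.REASSURING   # high arousal: reduce load, reassure first
    if state.understanding < 0.5:
        return Strategy.DETAILED     # low comprehension: elaborate
    return Strategy.CONCISE          # calm and informed: keep it brief


if __name__ == "__main__":
    state = UserState(arousal=estimate_arousal(0.8, 0.7, 0.9),
                      understanding=0.4, agreement=0.2)
    print(select_strategy(state))  # high arousal -> Strategy.REASSURING
```

The point of the sketch is only the control flow implied by the model: arousal is sensed first and gates the explanation style, comprehension is re-estimated after each explanation, and agreement is the downstream outcome the loop is meant to improve.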