🤖 AI Summary
To address the misalignment between existing XAI methods and human cognitive processes in high-stakes domains, this paper proposes a human-centered explanation framework grounded in appraisal theory. By integrating the Component Process Model (CPM) from emotion research into XAI, the framework models human cognitive appraisal of AI decisions along four dimensions: relevance, implications, coping potential, and normative significance. This enables context-sensitive, cognitively meaningful explanation generation, overcoming key limitations of conventional feature-importance and post-hoc attribution approaches. Empirical evaluation demonstrates significant improvements in users' depth of understanding of and trust in AI decisions. This work establishes the first systematic XAI paradigm that explicitly incorporates cognitive-scientific appraisal mechanisms, advancing the credible deployment of human-AI collaborative decision-making in high-risk applications.
📝 Abstract
Explainability remains a critical challenge in artificial intelligence (AI) systems, particularly in high-stakes domains such as healthcare, finance, and decision support, where users must understand and trust automated reasoning. Traditional explainability methods, such as feature importance and post-hoc justifications, often fail to capture the cognitive processes that underlie human decision-making, yielding explanations that are either overly technical or insufficiently meaningful. To address this gap, we propose a novel appraisal-based explainability framework inspired by the Component Process Model (CPM). While CPM has traditionally been applied to emotion research, we use its appraisal component as a cognitive model for generating human-aligned explanations. By structuring explanations around key appraisal dimensions, such as relevance, implications, coping potential, and normative significance, our framework provides context-sensitive, cognitively meaningful justifications for AI decisions. This work introduces a new paradigm for generating intuitive, human-centred explanations in AI-driven systems by bridging cognitive science and explainable AI.
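To make the idea of structuring explanations around appraisal dimensions concrete, here is a minimal, hypothetical sketch. The class and field names are illustrative assumptions of ours, not the paper's actual API or implementation; the paper only specifies the four dimensions themselves.

```python
from dataclasses import dataclass


@dataclass
class AppraisalExplanation:
    """Hypothetical container that structures an explanation of an AI
    decision along the four CPM appraisal dimensions from the abstract."""
    relevance: str               # why the decision matters to this user
    implications: str            # what consequences follow from it
    coping_potential: str        # what the user can do in response
    normative_significance: str  # how it relates to norms or policies

    def render(self) -> str:
        """Render the four appraisal dimensions as labeled lines."""
        return "\n".join([
            f"Relevance: {self.relevance}",
            f"Implications: {self.implications}",
            f"Coping potential: {self.coping_potential}",
            f"Normative significance: {self.normative_significance}",
        ])


# Example: a (fictional) loan-denial decision explained by appraisal dimension
expl = AppraisalExplanation(
    relevance="Your loan application was declined.",
    implications="The requested credit line is unavailable this quarter.",
    coping_potential="Lowering credit utilization below 40% may change the outcome.",
    normative_significance="The decision follows the lender's published risk policy.",
)
print(expl.render())
```

The point of such a structure is that each dimension answers a distinct cognitive question a user is likely to appraise (Does this concern me? What follows? What can I do? Is it legitimate?), rather than listing feature weights.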