AI Summary
Existing eXplainable AI (XAI) methods overemphasize algorithmic transparency, yielding abstract, static, and context-insensitive explanations that inadequately support effective human decision-making.
Method: This paper introduces "explanatory AI" (xAI), a novel paradigm positioning generative AI as a collaborative partner in human understanding. It employs narrative expression, adaptive personalization, and progressive disclosure to deliver contextualized, multimodal, and dynamic explanations. Grounded in human-centered cognitive principles, we systematically formulate an eight-dimensional conceptual model and implement a user-driven system via rapid contextual design, integrating generative AI, narrative generation, multimodal interaction, and adaptive recommendation.
Contribution/Results: The framework enables a paradigm shift from algorithmic transparency to human-centered understanding. Empirical evaluation with healthcare professionals demonstrates a significantly higher preference for its context-sensitive explanations, validating both its efficacy and its practical necessity.
Abstract
Current explainable AI (XAI) approaches prioritize algorithmic transparency and present explanations in abstract, non-adaptive formats that often fail to support meaningful end-user understanding. This paper introduces "Explanatory AI" as a complementary paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding rather than providers of algorithmic transparency. While XAI reveals algorithmic decision processes for model validation, Explanatory AI addresses contextual reasoning to support human decision-making in sociotechnical contexts. We develop a definition and systematic eight-dimensional conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles. Empirical validation through Rapid Contextual Design methodology with healthcare professionals demonstrates that users consistently prefer context-sensitive, multimodal explanations over technical transparency. Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection, establishing a comprehensive research agenda for advancing user-centered AI explanation approaches across diverse domains and cultural contexts.