From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI

📅 2025-08-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing eXplainable AI (XAI) methods overemphasize algorithmic transparency, yielding abstract, static, and context-insensitive explanations that inadequately support effective human decision-making. Method: This paper introduces "explanatory AI" (xAI), a novel paradigm positioning generative AI as a collaborative partner in human understanding. It employs narrative expression, adaptive personalization, and progressive disclosure to deliver contextualized, multimodal, and dynamic explanations. Grounded in human-centered cognitive principles, we systematically formulate an eight-dimensional conceptual model and implement a user-driven system via rapid contextual design, integrating generative AI, narrative generation, multimodal interaction, and adaptive recommendation. Contribution/Results: The framework enables a paradigm shift from algorithmic transparency to human-centered understanding. Empirical evaluation with healthcare professionals demonstrates significantly higher preference for its context-sensitive explanations, validating both efficacy and practical necessity.

๐Ÿ“ Abstract
Current explainable AI (XAI) approaches prioritize algorithmic transparency and present explanations in abstract, non-adaptive formats that often fail to support meaningful end-user understanding. This paper introduces "Explanatory AI" as a complementary paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding rather than providers of algorithmic transparency. While XAI reveals algorithmic decision processes for model validation, Explanatory AI addresses contextual reasoning to support human decision-making in sociotechnical contexts. We develop a definition and systematic eight-dimensional conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles. Empirical validation through Rapid Contextual Design methodology with healthcare professionals demonstrates that users consistently prefer context-sensitive, multimodal explanations over technical transparency. Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection, establishing a comprehensive research agenda for advancing user-centered AI explanation approaches across diverse domains and cultural contexts.
Problem

Research questions and friction points this paper is trying to address.

Shifting from algorithmic transparency to human-centered explanations using generative AI
Addressing contextual reasoning for better human decision-making in sociotechnical settings
Developing adaptive, narrative-driven explanations preferred over technical transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative AI for human-centered explanations
Narrative communication and adaptive personalization
Context-sensitive multimodal explanation design
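The paper itself presents a conceptual model rather than an algorithm, but the adaptive-personalization and progressive-disclosure ideas above can be illustrated with a minimal sketch. All names here (`UserContext`, `next_explanation_layer`, the three layer labels) are hypothetical illustrations, not the authors' implementation: the idea is simply that explanation depth is selected from the user's context and revealed in stages, rather than dumped as a static technical report.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Hypothetical user model driving explanation selection."""
    role: str          # e.g. "clinician", "data_scientist"
    expertise: str     # "novice" or "expert"
    detail_level: int  # how many deeper layers the user has requested so far

# Progressive-disclosure layers, from narrative to technical.
LAYERS = [
    "summary",    # plain-language narrative of the decision
    "evidence",   # key factors framed in the user's domain terms
    "technical",  # model internals, for validation-oriented users
]

def next_explanation_layer(ctx: UserContext) -> str:
    """Pick the explanation layer to show next for this user."""
    # Adaptive personalization: experts start one layer deeper;
    # novices begin with the narrative summary.
    start = 1 if ctx.expertise == "expert" else 0
    idx = min(start + ctx.detail_level, len(LAYERS) - 1)
    return LAYERS[idx]
```

A clinician with no prior drill-down would first see the narrative `"summary"` layer, while a model-validation specialist could start at `"evidence"` and progressively reach `"technical"` detail on request.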
Christian Meske
Ruhr-University Bochum
Digital Collaboration, Explainable Artificial Intelligence, Generative AI, Technology Acceptance
Justin Brenne
Ruhr University Bochum, 44801 Bochum, Germany
Erdi Uenal
Ruhr University Bochum, 44801 Bochum, Germany
Sabahat Oelcer
Ruhr West University of Applied Sciences, 46236 Bottrop, Germany
Ayseguel Doganguen
Ruhr West University of Applied Sciences, 46236 Bottrop, Germany