🤖 AI Summary
This study addresses the challenge that AI explanations often fail to accommodate the heterogeneous cognitive capacities, usage contexts, and ethical concerns of diverse stakeholders, including developers, domain experts, end users, and the general public. To bridge this gap, we propose a tri-layered explainability framework integrating cognitive, contextual, and ethical dimensions. Methodologically, our approach is the first to enable coordinated explanation across the algorithmic layer (technical fidelity), the human-centered layer (interactive adaptability), and the societal layer (value alignment). We further augment the societal layer with large language models to enhance natural-language explanation generation, shifting eXplainable AI (XAI) from static outputs toward dynamic trust co-construction. Empirical validation in high-stakes domains such as healthcare and judicial decision support demonstrates significant improvements in technical fidelity, user comprehension, and social accountability, establishing a practical, stakeholder-aware paradigm for trustworthy AI explanation.
📝 Abstract
The growing application of artificial intelligence in sensitive domains has intensified the demand for systems that are not only accurate but also explainable and trustworthy. Although explainable AI (XAI) methods have proliferated, many do not consider the diverse audiences that interact with AI systems, ranging from developers and domain experts to end users and society at large. This paper examines how trust in AI is influenced by the design and delivery of explanations and proposes a multilevel framework that aligns explanations with the epistemic, contextual, and ethical expectations of different stakeholders. The framework consists of three layers: algorithmic and domain-based explainability, human-centered explainability, and social explainability. We highlight the emerging role of Large Language Models (LLMs) in enhancing the social layer by generating accessible, natural-language explanations. Through illustrative case studies, we demonstrate how this approach supports technical fidelity, user engagement, and societal accountability, reframing XAI as a dynamic, trust-building process.