Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder Needs

📅 2025-06-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge that AI explanations often fail to accommodate the heterogeneous cognitive capacities, usage contexts, and ethical concerns of diverse stakeholders—including developers, domain experts, end users, and the general public. To bridge this gap, we propose a tri-layered explainability framework integrating cognitive, contextual, and ethical dimensions. Methodologically, our approach is the first to enable coordinated explanation across the algorithmic layer (technical fidelity), human-centered layer (interactive adaptability), and societal layer (value alignment). We innovatively augment the societal layer with large language models to enhance natural-language explanation generation, shifting eXplainable AI (XAI) from static outputs toward dynamic trust co-construction. Empirical validation in high-stakes domains—such as healthcare and judicial decision support—demonstrates significant improvements in technical fidelity, user comprehension, and social accountability, establishing a practical, stakeholder-aware paradigm for trustworthy AI explanation.
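For readers who prefer code, below is a minimal, hypothetical sketch of how the three coordinated layers described in the summary (algorithmic, human-centered, societal) could be wired into one explanation pipeline. The `Explanation` dataclass, the layer functions, and the `fake_llm` stub are illustrative assumptions for this page, not an implementation from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Explanation:
    """One explanation targeted at one stakeholder group."""
    layer: str      # "algorithmic", "human-centered", or "societal"
    audience: str   # e.g. "developer", "domain expert", "general public"
    content: str


def algorithmic_layer(attributions: Dict[str, float]) -> Explanation:
    """Technical fidelity: report the raw attributions driving the prediction."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    body = "; ".join(f"{name}={weight:+.2f}" for name, weight in ranked)
    return Explanation("algorithmic", "developer", f"Top attributions: {body}")


def human_centered_layer(attributions: Dict[str, float], top_k: int = 2) -> Explanation:
    """Interactive adaptability: keep only the few factors a domain expert asked about."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    body = ", ".join(f"{name} ({'raises' if w > 0 else 'lowers'} risk)" for name, w in ranked)
    return Explanation("human-centered", "domain expert", f"Main factors: {body}")


def societal_layer(summary: str, llm: Callable[[str], str]) -> Explanation:
    """Value alignment: ask an LLM to restate the decision in plain, accountable language."""
    prompt = f"Explain to a layperson, in one sentence, why the system decided this: {summary}"
    return Explanation("societal", "general public", llm(prompt))


def explain_for_all(attributions: Dict[str, float], llm: Callable[[str], str]) -> List[Explanation]:
    """Coordinate the three layers so each stakeholder receives an aligned explanation."""
    algo = algorithmic_layer(attributions)
    human = human_centered_layer(attributions)
    social = societal_layer(human.content, llm)
    return [algo, human, social]


if __name__ == "__main__":
    # Stand-in LLM so the sketch runs offline; a real model call would go here.
    fake_llm = lambda prompt: "The system flagged this case mainly because of elevated blood pressure."
    attributions = {"blood_pressure": 0.62, "age": 0.21, "exercise": -0.35}
    for exp in explain_for_all(attributions, fake_llm):
        print(f"[{exp.layer} -> {exp.audience}] {exp.content}")
```

The design choice mirrors the framework's intent: the societal layer consumes the human-centered summary rather than the raw attributions, so the LLM-generated explanation stays grounded in the same evidence shown to experts.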

📝 Abstract
The growing application of artificial intelligence in sensitive domains has intensified the demand for systems that are not only accurate but also explainable and trustworthy. Although explainable AI (XAI) methods have proliferated, many do not consider the diverse audiences that interact with AI systems: from developers and domain experts to end-users and society. This paper addresses how trust in AI is influenced by the design and delivery of explanations and proposes a multilevel framework that aligns explanations with the epistemic, contextual, and ethical expectations of different stakeholders. The framework consists of three layers: algorithmic and domain-based, human-centered, and social explainability. We highlight the emerging role of Large Language Models (LLMs) in enhancing the social layer by generating accessible, natural language explanations. Through illustrative case studies, we demonstrate how this approach facilitates technical fidelity, user engagement, and societal accountability, reframing XAI as a dynamic, trust-building process.
Problem

Research questions and friction points this paper is trying to address.

Align AI explanations with diverse stakeholder needs
Enhance trust via multilevel explainability framework
Leverage LLMs for accessible social explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilevel framework aligning AI explanations with stakeholders
LLMs enhance social explainability via natural language
Three layers: algorithmic, human-centered, social explainability
Marilyn Bello
Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada, Granada, Spain
Rafael Bello
Professor of Computer Science, Universidad Central "Marta Abreu" de Las Villas
Artificial intelligence
Maria-Matilde García
Department of Computer Science, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba
Ann Nowé
Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussel, Belgium
Iván Sevillano-García
Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada, Granada, Spain
Francisco Herrera
Professor Computer Science and AI, DaSCI Research Institute, Granada University, Spain
Artificial Intelligence, Computational Intelligence, Data Science, Trustworthy AI