Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
The “black-box” nature of AI decision-making necessitates explainable AI (XAI), yet existing methods primarily target experts and fail to serve the general public. This work proposes the first domain-, model-, and explanation-agnostic dual-audience XAI framework: leveraging large language models (LLMs) and in-context learning, it simultaneously generates technical explanations for experts and simplified, accessible explanations for non-experts—unifying rigor and inclusivity. Key contributions include: (1) the first unified explanation generation mechanism tailored to both audiences; (2) an empirically grounded XAI semantic dictionary serving as a reproducible evaluation benchmark; and (3) advancing XAI from an expert-centric tool to a trustworthy, publicly accessible infrastructure. Experiments achieve high explanation fidelity (Spearman ρ = 0.92); a user study (N = 56) demonstrates significantly improved comprehension among non-experts; and extensive validation across 40+ data–model–XAI combinations confirms strong generalizability and reproducibility.

📝 Abstract
Artificial Intelligence (AI) is rapidly being embedded in critical decision-making systems, yet their underlying "black-box" models require eXplainable AI (XAI) solutions to enhance transparency, and these solutions are mostly oriented to experts, leaving them inaccessible to non-experts. Alarming evidence about AI's unprecedented risks to human values underscores the imperative need for transparent, human-centered XAI solutions. In this work, we introduce a domain-, model-, and explanation-agnostic, generalizable, and reproducible framework that ensures both transparency and human-centered explanations tailored to the needs of experts and non-experts alike. The framework leverages Large Language Models (LLMs) and employs in-context learning to convey domain- and explainability-relevant contextual knowledge to the LLM. Through its structured prompt and system setting, the framework encapsulates in a single response explanations understandable by non-experts alongside technical information for experts, all grounded in domain and explainability principles. To demonstrate the framework's effectiveness, we establish a ground-truth contextual "thesaurus" through rigorous benchmarking of over 40 data, model, and XAI combinations for an explainable clustering analysis of a well-being scenario. Through a comprehensive evaluation of the quality and human-friendliness of the framework's explanations, we demonstrate high content quality via strong correlation with ground-truth explanations (Spearman rank correlation = 0.92) and improved interpretability and human-friendliness for non-experts through a user study (N = 56). Our overall evaluation confirms trust in LLMs as HCXAI enablers: the framework bridges these gaps by delivering (i) high-quality technical explanations aligned with foundational XAI methods and (ii) clear, efficient, and interpretable human-centered explanations for non-experts.
Problem

Research questions and friction points this paper is trying to address.

Democratizing XAI for both experts and non-experts
Bridging the gap between technical and human-centered explanations
Ensuring transparency and interpretability in AI decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered LLM framework for XAI
Leverages in-context learning with LLMs
Domain-agnostic prompt system for explanations
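The bullets above can be made concrete with a minimal sketch of how such a structured, dual-audience prompt might be assembled. The helper name `build_dual_audience_prompt`, the section labels, and the example SHAP values are illustrative assumptions, not the authors' actual prompt or data:

```python
# Hedged sketch: assembling one in-context-learning prompt that requests both
# an expert-facing and a non-expert-facing explanation in a single response.
# All names and values below are hypothetical, not taken from the paper.

def build_dual_audience_prompt(domain_context: str, xai_output: str, question: str) -> str:
    """Build a single prompt asking an LLM for paired explanations:
    a technical one for experts and a plain-language one for non-experts,
    both grounded in the supplied XAI method output."""
    return "\n".join([
        "You are an explainability assistant.",
        f"Domain context: {domain_context}",
        f"XAI method output: {xai_output}",
        f"User question: {question}",
        "Respond in two clearly labeled sections:",
        "EXPERT: a technically precise explanation citing the XAI values above.",
        "NON-EXPERT: a short, jargon-free explanation of the same result.",
    ])

# Example usage with made-up values from a well-being clustering scenario.
prompt = build_dual_audience_prompt(
    domain_context="well-being survey clustering (3 clusters)",
    xai_output="SHAP importances: sleep_quality=0.41, stress=0.33, exercise=0.12",
    question="Why was this respondent assigned to cluster 2?",
)
print(prompt)
```

The resulting string would then be sent to an LLM of choice; the key design point mirrored from the paper is that one response carries both audiences' explanations, rather than running two separate queries.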