ProfileXAI: User-Adaptive Explainable AI

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address heterogeneous user requirements for model interpretability, this paper proposes a model- and domain-agnostic explainable AI (XAI) framework. Methodologically, it integrates post-hoc explanation techniques (SHAP, LIME, Anchor) with retrieval-augmented large language models (LLMs), implementing a user-profile-conditioned explanation mechanism that dynamically selects the optimal explainer. Explanations are generated via multimodal knowledge base indexing and conversational prompt engineering, yielding natural-language outputs with low redundancy and high fidelity. The key contributions are personalized explanation strategy adaptation and cross-user consistency preservation. Experimental evaluation on heart disease and thyroid cancer datasets demonstrates complementary strengths among explainers: average user satisfaction reaches 4.1/5, expert-assessed explanation quality scores 3.77/5, and token consumption remains stable (σ ≤ 13%).

📝 Abstract
ProfileXAI is a model- and domain-agnostic framework that couples post-hoc explainers (SHAP, LIME, Anchor) with retrieval-augmented LLMs to produce explanations for different types of users. The system indexes a multimodal knowledge base, selects an explainer per instance via quantitative criteria, and generates grounded narratives with chat-enabled prompting. On Heart Disease and Thyroid Cancer datasets, we evaluate fidelity, robustness, parsimony, token use, and perceived quality. No explainer dominates: LIME achieves the best fidelity--robustness trade-off (Infidelity $\le 0.30$, $L < 0.7$ on Heart Disease); Anchor yields the sparsest, low-token rules; SHAP attains the highest satisfaction ($\bar{x} = 4.1$). Profile conditioning stabilizes tokens ($\sigma \le 13\%$) and maintains positive ratings across profiles ($\bar{x} \ge 3.7$, with domain experts at $3.77$), enabling efficient and trustworthy explanations.
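The abstract describes conditioning the generated narrative on a user profile via conversational prompting. A minimal sketch of that idea, assuming a hypothetical `build_prompt` helper and illustrative profile styles (not the paper's actual API), might render the same explainer output differently per profile:

```python
# Hypothetical sketch of profile-conditioned prompting: the same explainer
# output (feature attributions) plus retrieved context is rendered into a
# prompt whose style depends on the user profile. Profile names, features,
# and wording are illustrative assumptions, not the paper's implementation.

PROFILE_STYLES = {
    "domain_expert": "Use clinical terminology and report attributions numerically.",
    "lay_user": "Avoid jargon; describe the top factors in plain language.",
}

def build_prompt(profile, prediction, attributions, retrieved_context):
    """Assemble a grounded, profile-conditioned prompt for the LLM."""
    style = PROFILE_STYLES.get(profile, PROFILE_STYLES["lay_user"])
    # Keep only the three most influential features to limit token use.
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
    facts = "; ".join(f"{feat}={w:+.2f}" for feat, w in top)
    return (
        f"{style}\n"
        f"Prediction: {prediction}\n"
        f"Top feature attributions: {facts}\n"
        f"Background: {retrieved_context}\n"
        "Explain this prediction for the user."
    )

prompt = build_prompt(
    "domain_expert", "high risk",
    {"age": 0.31, "chol": 0.12, "thalach": -0.25, "sex": 0.05},
    "Guideline snippet retrieved from the knowledge base.",
)
```

Filtering to the top-k attributions before prompting is one plausible way to keep token counts stable across profiles, as the reported $\sigma \le 13\%$ suggests the framework does by some mechanism.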
Problem

Research questions and friction points this paper is trying to address.

Generating user-adaptive explanations using explainable AI framework
Evaluating fidelity and robustness of post-hoc explainers on datasets
Stabilizing explanation quality across different user profiles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework couples explainers with retrieval-augmented LLMs
System indexes multimodal knowledge base for explanations
Profile conditioning stabilizes tokens and maintains ratings
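The per-instance explainer selection described above can be sketched as a scoring loop: each candidate explainer is evaluated on a quantitative criterion (here, an infidelity proxy; lower is better) and the best one is chosen for that instance. The scoring function, toy model, and toy explainers below are illustrative assumptions, not the paper's actual code:

```python
# Hedged sketch of quantitative per-instance explainer selection.
# `score_infidelity` approximates infidelity as the mean squared gap between
# the attribution-predicted output change and the actual output change under
# small random perturbations of the input.
import random

def score_infidelity(explanation, model, x, n_perturb=50, eps=0.1, seed=0):
    rng = random.Random(seed)
    base = model(x)
    total = 0.0
    for _ in range(n_perturb):
        delta = [rng.uniform(-eps, eps) for _ in x]
        x_pert = [xi - di for xi, di in zip(x, delta)]
        predicted_change = sum(d * e for d, e in zip(delta, explanation))
        actual_change = base - model(x_pert)
        total += (predicted_change - actual_change) ** 2
    return total / n_perturb

def select_explainer(explainers, model, x):
    """Return the (name, explanation) pair with the lowest infidelity on x."""
    best = None
    for name, explain in explainers.items():
        expl = explain(model, x)
        score = score_infidelity(expl, model, x)
        if best is None or score < best[2]:
            best = (name, expl, score)
    return best[0], best[1]

# Toy linear model and two toy "explainers" standing in for SHAP/LIME/Anchor.
model = lambda x: 2.0 * x[0] - 1.0 * x[1]
explainers = {
    "exact_gradient": lambda m, x: [2.0, -1.0],  # faithful attributions
    "noisy": lambda m, x: [1.0, 1.0],            # unfaithful attributions
}
name, expl = select_explainer(explainers, model, [0.5, 0.3])
```

For the linear toy model the faithful attributions reproduce the output change exactly, so the loop selects `exact_gradient`; a real deployment would score actual SHAP, LIME, and Anchor outputs the same way.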
Gilber A. Corrales
Facultad de Ingeniería y Ciencias Básicas, Universidad Autónoma de Occidente, Cali 760000, Colombia; and with the Escuela de Gobierno, GobLab, Universidad Adolfo Ibáñez, Santiago de Chile 8320000, Chile

Carlos Andrés Ferro Sánchez
Facultad de Ingeniería y Ciencias Básicas, Universidad Autónoma de Occidente, Cali 760000, Colombia

Reinel Tabares-Soto
Escuela de Gobierno, GobLab, Universidad Adolfo Ibáñez, and the Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago de Chile 8320000, Chile; and with the Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, and the Departamento de Sistemas e Informática, Universidad de Caldas, Manizales 170003, Colombia

Jesús Alfonso López Sotelo
Facultad de Ingeniería y Ciencias Básicas, Universidad Autónoma de Occidente, Cali 760000, Colombia

Gonzalo A. Ruz
Professor, Universidad Adolfo Ibáñez
Machine Learning · Bayesian Networks · Boolean Networks · Gene Regulatory Networks

Johan Sebastian Piña Durán
Escuela de Gobierno, GobLab, Universidad Adolfo Ibáñez, Santiago de Chile 8320000, Chile; and with Universidad Autónoma de Manizales, Manizales 170003, Colombia