🤖 AI Summary
To address heterogeneous user requirements for model interpretability, this paper proposes a model- and domain-agnostic explainable AI (XAI) framework. Methodologically, it integrates post-hoc explanation techniques (SHAP, LIME, Anchor) with retrieval-augmented large language models (LLMs), implementing a user-profile-conditioned explanation mechanism that dynamically selects the optimal explainer. Explanations are generated via multimodal knowledge base indexing and conversational prompt engineering, yielding natural-language outputs with low redundancy and high fidelity. The key contributions are personalized explanation strategy adaptation and cross-user consistency preservation. Experimental evaluation on heart disease and thyroid cancer datasets demonstrates complementary strengths among explainers: average user satisfaction reaches 4.1/5, expert-assessed explanation quality scores 3.77/5, and token consumption remains stable (σ ≤ 13%).
📝 Abstract
ProfileXAI is a model- and domain-agnostic framework that couples post-hoc explainers (SHAP, LIME, Anchor) with retrieval-augmented LLMs to produce explanations for different types of users. The system indexes a multimodal knowledge base, selects an explainer per instance via quantitative criteria, and generates grounded narratives with chat-enabled prompting. On Heart Disease and Thyroid Cancer datasets, we evaluate fidelity, robustness, parsimony, token use, and perceived quality. No explainer dominates: LIME achieves the best fidelity–robustness trade-off (Infidelity $\le 0.30$, $L<0.7$ on Heart Disease); Anchor yields the sparsest, low-token rules; SHAP attains the highest satisfaction ($\bar{x}=4.1$). Profile conditioning stabilizes tokens ($\sigma \le 13\%$) and maintains positive ratings across profiles ($\bar{x}\ge 3.7$, with domain experts at $3.77$), enabling efficient and trustworthy explanations.
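The per-instance explainer selection described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the metric names, candidate scores, and the weighted cost function are all hypothetical assumptions standing in for the framework's quantitative criteria.

```python
# Hypothetical sketch of per-instance explainer selection via quantitative
# criteria (fidelity and parsimony). All names, scores, and weights here
# are illustrative assumptions, not values from the paper.

def select_explainer(scores: dict[str, dict[str, float]]) -> str:
    """Pick the candidate explainer minimizing a combined cost:
    lower infidelity (better fidelity) and fewer features used
    (better parsimony) are both preferred."""
    def cost(metrics: dict[str, float]) -> float:
        # Assumed weighting: fidelity dominates, parsimony breaks ties.
        return metrics["infidelity"] + 0.01 * metrics["n_features"]
    return min(scores, key=lambda name: cost(scores[name]))

# Illustrative per-instance metrics for the three explainers.
candidates = {
    "SHAP":   {"infidelity": 0.35, "n_features": 10},
    "LIME":   {"infidelity": 0.28, "n_features": 8},
    "Anchor": {"infidelity": 0.40, "n_features": 3},
}
print(select_explainer(candidates))  # → LIME
```

Under a different weighting (e.g. emphasizing sparsity or token cost), another explainer would win, consistent with the finding that no single explainer dominates.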