🤖 AI Summary
To address the “one-size-fits-all” problem in explanations for image classification models, this paper proposes the first human-centered explainable AI (XAI) framework that explicitly models users’ domain expertise as a core variable in explanation generation. Methodologically, it integrates user modeling, informativeness-driven training-sample selection, local explanation ensembling, and simulatability evaluation to dynamically generate personalized explanations (illustrative examples, local feature attributions, and decision logic) tailored to each user’s expertise level. Evaluated on multiple benchmark datasets, both in simulation and in a user study with 100 participants, the framework significantly improves users’ ability to predict model behavior (i.e., simulatability), outperforming state-of-the-art baselines across all metrics. Its core contribution is an end-to-end mapping from users’ domain knowledge to explanation strategies, advancing XAI from generic, model-centric explanations toward truly human-centered, adaptive interpretability.
📝 Abstract
Effectively explaining the decisions of black-box machine learning models is critical to the responsible deployment of AI systems that rely on them. Recognizing the importance of such explanations, the field of explainable AI (XAI) provides several techniques to generate them. Yet, there is relatively little emphasis on the user (the explainee) in this growing body of work, and most XAI techniques generate "one-size-fits-all" explanations. To bridge this gap and move a step closer to human-centered XAI, we present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise. Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions. However, unlike prior work, I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users. We posit that by tailoring the example set to user expertise, I-CEE can better facilitate users' understanding and simulatability of the model. To evaluate our approach, we conduct detailed experiments in both simulation and with human participants (N = 100) on multiple datasets. Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions (simulatability) compared to baselines, providing promising preliminary results. Experiments with human participants demonstrate that our method significantly improves user simulatability accuracy, highlighting the importance of human-centered XAI.
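The abstract's central idea, that the informativeness of an example depends on the user's expertise, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the scoring rule (KL divergence between a simulated user's beliefs and the model's predictive distribution), the toy data, and all names (`informativeness`, `select_examples`) are illustrative assumptions: an example is scored as more informative the more the model's prediction would surprise that particular user, so different user models yield different example sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not the paper's method):
# per-example class distributions for the model and for a simulated user.
n_examples, n_classes = 200, 5
model_probs = rng.dirichlet(np.ones(n_classes), size=n_examples)  # model's predictions
user_probs = rng.dirichlet(np.ones(n_classes), size=n_examples)   # simulated user's beliefs

def informativeness(model_p, user_p):
    """Score each example by how much the model's prediction would surprise
    this user: KL(model || user), computed row-wise. Higher = more informative."""
    eps = 1e-12  # avoid log(0)
    return np.sum(model_p * np.log((model_p + eps) / (user_p + eps)), axis=1)

def select_examples(model_p, user_p, k=5):
    """Return indices of the k most informative examples for this user."""
    scores = informativeness(model_p, user_p)
    return np.argsort(scores)[::-1][:k]

chosen = select_examples(model_probs, user_probs, k=5)
```

A user whose beliefs already match the model (e.g., a domain expert) would receive low scores on easy examples and be shown subtler cases instead, which is the intuition behind expertise-tailored example selection.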