AI Summary
Existing LIME workflows support only single-image analysis and lack interactive mechanisms, limiting comprehensive understanding of model decision rationales. To address this, the authors propose LIMEVis, the first visualization tool enabling synchronized, interactive exploration of LIME explanations across multiple images. By integrating controllable image segmentation and parallel comparative analysis, LIMEVis allows users to directly manipulate explanation outputs to probe model behavior. This substantially improves the ability to identify common visual features the model relies on and supports interactive validation of how specific regions influence predictions. Consequently, LIMEVis advances the practical utility and analytical depth of explainable AI in computer vision applications.
Abstract
Explainable Artificial Intelligence (XAI) has gained importance for interpreting model predictions. Among leading XAI techniques, Local Interpretable Model-agnostic Explanations (LIME) is the most frequently used, as it markedly improves understanding of complex models. However, LIME is constrained to analyzing a single image at a time, and it lacks interaction mechanisms for observing its results and for directly manipulating the factors that affect them. To address these issues, we introduce LIMEVis, an interactive visualization tool that improves the LIME analysis workflow by enabling users to explore multiple LIME results simultaneously and modify them directly. With LIMEVis, users can conveniently identify common features in images that a model appears to rely on mainly for category classification. Additionally, by interactively modifying LIME results, users can determine which segments of an image influence the model's classification.
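The image-based LIME procedure that LIMEVis builds on works by splitting an image into segments (superpixels), randomly hiding subsets of segments, scoring each perturbed image with the black-box model, and attributing importance to each segment from the resulting (mask, score) pairs. Below is a minimal pure-Python sketch of that perturbation idea; `toy_model` is a hypothetical stand-in for a real classifier, and the importance estimate is a simplified mean-difference version of what LIME actually does (LIME fits a locally weighted linear model, e.g. ridge regression, to the samples):

```python
import random

def toy_model(mask):
    # Hypothetical black-box classifier score: in this toy,
    # the prediction depends strongly on segment 1 and weakly on segment 3.
    return 1.0 * mask[1] + 0.2 * mask[3]

def lime_style_importance(model, n_segments, n_samples=2000, seed=0):
    """Estimate per-segment importance via random masking (LIME's core idea):
    importance[i] = mean score with segment i visible
                  - mean score with segment i hidden."""
    rng = random.Random(seed)
    on_sum = [0.0] * n_segments
    on_cnt = [0] * n_segments
    off_sum = [0.0] * n_segments
    off_cnt = [0] * n_segments
    for _ in range(n_samples):
        # Each sample keeps or hides every segment independently at random.
        mask = [rng.randint(0, 1) for _ in range(n_segments)]
        score = model(mask)
        for i in range(n_segments):
            if mask[i]:
                on_sum[i] += score
                on_cnt[i] += 1
            else:
                off_sum[i] += score
                off_cnt[i] += 1
    return [on_sum[i] / max(on_cnt[i], 1) - off_sum[i] / max(off_cnt[i], 1)
            for i in range(n_segments)]

importance = lime_style_importance(toy_model, n_segments=4)
ranked = sorted(range(4), key=lambda i: importance[i], reverse=True)
print(ranked[0])  # segment 1 should rank highest
```

Interactively toggling segments on and off, as LIMEVis allows, corresponds to evaluating `model(mask)` for user-chosen masks instead of random ones, which is how a user can validate that a particular region really drives the prediction.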