🤖 AI Summary
Existing XAI evaluation methods are designed primarily for image classification and fail to accommodate the pixel-level predictions and spatial-contextual dependencies inherent in semantic segmentation. Method: The paper introduces the first systematic XAI evaluation framework tailored specifically to semantic segmentation. It integrates pixel-wise explanation quality quantification with spatial-contextual complexity modeling, proposing novel metrics that jointly assess local fidelity and structural consistency. It adapts class activation mapping (CAM) and related methods to dense prediction outputs and establishes an end-to-end interpretability analysis pipeline. Contribution/Results: Extensive experiments across multiple state-of-the-art segmentation models and XAI techniques demonstrate that the framework improves the comparability, robustness, and trustworthiness of explanations, and it provides a reproducible, fine-grained evaluation benchmark for validating transparency in semantic segmentation models.
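Classification-style CAM carries over to dense prediction by pooling gradients of the logits aggregated over a target region (e.g., the pixels predicted as one class) instead of a single class score. A minimal numpy sketch of that adaptation; the function name, array shapes, and normalization here are our own illustrative assumptions, not the paper's API:

```python
import numpy as np

def seg_grad_cam(feature_maps, gradients):
    """Grad-CAM-style heatmap adapted to segmentation (illustrative sketch).

    feature_maps: (K, H, W) activations from a chosen convolutional layer.
    gradients:    (K, H, W) gradients of the target-class logits, summed
                  over the pixels of interest, w.r.t. those activations
                  (supplied by the caller's autodiff framework).
    """
    # Global-average-pool the gradients into one weight per channel,
    # exactly as in classification Grad-CAM.
    weights = gradients.mean(axis=(1, 2))               # (K,)
    cam = np.einsum("k,khw->hw", weights, feature_maps)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                            # normalize to [0, 1]
    return cam
```

The only segmentation-specific choice is upstream: which pixels' logits are summed before differentiation, which lets the same heatmap machinery explain a region rather than a whole-image class score.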
📝 Abstract
Ensuring transparency and trust in artificial intelligence (AI) models is essential, particularly as they are increasingly applied in safety-critical and high-stakes domains. Explainable AI (XAI) has emerged as a promising approach to address this challenge, yet the rigorous evaluation of XAI methods remains crucial for optimizing the trade-offs between model complexity, predictive performance, and interpretability. While extensive progress has been achieved in evaluating XAI techniques for classification tasks, evaluation strategies tailored to semantic segmentation remain relatively underexplored. This work introduces a comprehensive and systematic evaluation framework specifically designed for assessing XAI in semantic segmentation, explicitly accounting for both spatial and contextual task complexities. The framework employs pixel-level evaluation strategies and carefully designed metrics to provide fine-grained interpretability insights. Simulation results using recently adapted class activation mapping (CAM)-based XAI schemes demonstrate the efficiency, robustness, and reliability of the proposed methodology. These findings contribute to advancing transparent, trustworthy, and accountable semantic segmentation models.
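One concrete example of the kind of pixel-level evaluation strategy described above is a "relevance mass" check: the fraction of a class's total attribution that falls inside that class's predicted mask. This is a generic illustrative metric of our own choosing, not necessarily one of the paper's proposed metrics:

```python
import numpy as np

def relevance_mass(saliency, mask, eps=1e-12):
    """Share of attribution lying inside the predicted segment.

    saliency: (H, W) attribution map for one class.
    mask:     (H, W) boolean predicted mask for that class.
    Returns a value in [0, 1]: 1.0 means all positive evidence
    sits on the predicted object, 0.0 means none of it does.
    """
    saliency = np.clip(saliency, 0.0, None)   # ignore negative evidence
    inside = (saliency * mask).sum()
    return float(inside / (saliency.sum() + eps))
```

Scores like this can be computed per image and per class, then aggregated across a dataset to compare XAI methods at the fine granularity the abstract calls for.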