AI Summary
This study addresses the long-overlooked issue of explanation robustness in eXplainable AI (XAI) for model optimization. Method: We formally establish *XAI consistency*, the agreement among feature attribution methods (e.g., Integrated Gradients, Grad-CAM, Saliency), as a core optimization objective co-equal with predictive performance. We propose a quantitative XAI consistency metric and embed it into a multi-objective joint hyperparameter-and-architecture optimization framework, leveraging cross-method attribution validation and efficient search via the SPOT toolkit. Contribution/Results: We identify three distinct trade-off regions in the model architecture space between predictive accuracy and interpretability. Empirically, models with high XAI consistency exhibit superior out-of-distribution generalization. Our approach discovers a Pareto-optimal subset of models that achieve both high prediction accuracy and high XAI consistency, substantially mitigating overfitting and establishing a novel paradigm for trustworthy AI.
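The summary does not define the consistency metric itself; one plausible reading of "agreement among feature attribution methods" is the mean pairwise correlation between the attribution maps each method produces for the same input. A minimal sketch under that assumption (the function name `xai_consistency` and the Pearson-correlation choice are illustrative, not the paper's definition):

```python
import numpy as np

def xai_consistency(attribution_maps):
    """Mean pairwise Pearson correlation between flattened attribution maps.

    attribution_maps: list of equally-shaped arrays, one per XAI method
    (e.g. Integrated Gradients, Grad-CAM, Saliency). Values near 1 mean
    the methods agree on which features matter; values near -1 mean they
    actively disagree.
    """
    flat = [np.asarray(m, dtype=float).ravel() for m in attribution_maps]
    corrs = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            # Off-diagonal entry of the 2x2 correlation matrix
            corrs.append(np.corrcoef(flat[i], flat[j])[0, 1])
    return float(np.mean(corrs))
```

Rank-based correlation (e.g. Spearman) would be an equally defensible choice here, since attribution magnitudes are often not directly comparable across methods.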
Abstract
Despite the growing interest in Explainable Artificial Intelligence (XAI), explainability is rarely considered during hyperparameter tuning or neural architecture optimization, where the focus remains primarily on minimizing predictive loss. In this work, we introduce the novel concept of XAI consistency, defined as the agreement among different feature attribution methods, and propose new metrics to quantify it. For the first time, we integrate XAI consistency directly into the hyperparameter tuning objective, creating a multi-objective optimization framework that balances predictive performance with explanation robustness. Implemented within the Sequential Parameter Optimization Toolbox (SPOT), our approach uses both weighted aggregation and desirability-based strategies to guide model selection. Through our proposed framework and supporting tools, we explore the impact of incorporating XAI consistency into the optimization process. This enables us to characterize distinct regions in the architecture configuration space: one region with poor performance and comparatively low interpretability, another with strong predictive performance but weak interpretability due to low XAI consistency, and a trade-off region that balances both objectives by offering high interpretability alongside competitive performance. Beyond introducing this novel approach, our research provides a foundation for future investigations into whether models from the trade-off zone, which balance performance loss and XAI consistency, exhibit greater robustness by avoiding overfitting to training performance, thereby leading to more reliable predictions on out-of-distribution data.
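The abstract names two scalarization strategies for the multi-objective tuner, weighted aggregation and desirability. A minimal sketch of what such objective functions could look like (the weights, the `1 - consistency` penalty, and the linear desirability mapping are illustrative assumptions, not SPOT's actual API):

```python
def weighted_objective(val_loss, consistency, w_loss=0.5, w_xai=0.5):
    """Weighted aggregation: a single scalar for the tuner, lower is better.

    val_loss: predictive loss on validation data.
    consistency: XAI consistency in [0, 1]; 1 - consistency penalizes
    disagreement between attribution methods.
    """
    return w_loss * val_loss + w_xai * (1.0 - consistency)

def desirability_objective(val_loss, consistency, loss_worst=1.0):
    """Desirability-style aggregation: map each criterion to [0, 1] and
    combine by geometric mean; the tuner would maximize this value.

    loss_worst is an assumed worst acceptable loss; losses at or beyond
    it get zero desirability.
    """
    d_loss = max(0.0, 1.0 - val_loss / loss_worst)  # 1.0 at zero loss
    d_xai = max(0.0, consistency)                   # assumes consistency in [0, 1]
    return (d_loss * d_xai) ** 0.5
```

The geometric mean is the standard choice in desirability approaches because a model that completely fails on either criterion receives an overall score of zero, which matches the paper's intent that neither accuracy nor interpretability alone is sufficient.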