Guidelines For The Choice Of The Baseline in XAI Attribution Methods

๐Ÿ“… 2025-03-25
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Baseline selection critically affects the reliability, fairness, and interpretability of feature attribution in explainable AI (XAI), yet existing approaches rely heavily on heuristic or domain-specific choices, introducing subjectivity and ambiguity. Method: We propose a decision-boundary-guided automated baseline selection method that identifies baselines by sampling near the modelโ€™s decision boundaryโ€”a theoretically grounded, lightweight, and semantically consistent search domain. Our approach requires no model retraining and is compatible with mainstream attribution algorithms including Integrated Gradients (IG) and Grad-CAM. Contribution/Results: We provide the first systematic theoretical analysis revealing how baseline choice impacts attribution stability and semantic plausibility. Empirical evaluation on synthetic and real-world datasets demonstrates substantial reductions in attribution subjectivity and ambiguity, alongside improved cross-algorithm explanation consistency. An open-source implementation and reproducible guidelines are provided to advance standardization and trustworthy deployment of XAI explanations.

Technology Category

Application Category

๐Ÿ“ Abstract
Given the broad adoption of artificial intelligence, it is essential to provide evidence that AI models are reliable, trustable, and fair. To this end, the emerging field of eXplainable AI develops techniques to probe such requirements, counterbalancing the hype pushing the pervasiveness of this technology. Among the many facets of this issue, this paper focuses on baseline attribution methods, aiming at deriving a feature attribution map at the network input relying on a"neutral"stimulus usually called"baseline". The choice of the baseline is crucial as it determines the explanation of the network behavior. In this framework, this paper has the twofold goal of shedding light on the implications of the choice of the baseline and providing a simple yet effective method for identifying the best baseline for the task. To achieve this, we propose a decision boundary sampling method, since the baseline, by definition, lies on the decision boundary, which naturally becomes the search domain. Experiments are performed on synthetic examples and validated relying on state-of-the-art methods. Despite being limited to the experimental scope, this contribution is relevant as it offers clear guidelines and a simple proxy for baseline selection, reducing ambiguity and enhancing deep models' reliability and trust.
Problem

Research questions and friction points this paper is trying to address.

Examining the impact of baseline choice in XAI attribution methods
Proposing a method to identify optimal baselines for network explanations
Enhancing model reliability by reducing ambiguity in baseline selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes decision boundary sampling method
Identifies optimal baseline for XAI
Enhances model reliability and trust
๐Ÿ”Ž Similar Papers
No similar papers found.
Cristian Morasso
Cristian Morasso
University of Verona
Artificial IntelligenceExplainable AIMulti Agent Reinforcement LearningMachine Learning
Giorgio Dolci
Giorgio Dolci
University of Verona
I
Ilaria Boscolo Galazzo
Department of Engineering for Innovation Medicine, University of Verona, Verona, Italy
S
Sergey M. Plis
Tri-Institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University
Gloria Menegaz
Gloria Menegaz
Professor of Bioengineering, University of Verona
Neuroimagingexplainable AIdeep learningbrain connectivity