Axiomatic Explainer Globalness via Optimal Transport

📅 2024-11-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing explanation methods lack a quantifiable assessment of explanation diversity—termed “globalness”—hindering rigorous cross-method comparison. Method: This paper introduces the first axiomatic definition of globalness for explanation methods, proposing four fundamental axioms and proving that Wasserstein Globalness is the unique metric satisfying all of them. Built upon optimal transport theory, this metric ensures both theoretical soundness and computational tractability. The approach integrates feature attribution analysis and is empirically validated across diverse modalities—including images, tabular data, and synthetic datasets. Contribution/Results: Experiments demonstrate that Wasserstein Globalness significantly improves comparability and selection accuracy across explanation methods. It constitutes the first standardized evaluation metric for explainable AI backed by rigorous mathematical guarantees, enabling principled, theory-grounded assessment of explanation diversity.

📝 Abstract
Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.
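The abstract frames globalness as a property of the distribution of explanations: zero diversity when every input receives the identical explanation, maximal diversity when explanations are uniformly spread. A toy sketch of that intuition is below. It is not the paper's Wasserstein Globalness metric; it simply uses the per-feature 1-D Wasserstein-1 distance between the empirical attribution distribution and a point mass at its mean (which reduces to mean absolute deviation), and the `diversity_score` name and synthetic attribution arrays are illustrative assumptions.

```python
import numpy as np

def diversity_score(attributions):
    """Average per-feature Wasserstein-1 distance between the empirical
    attribution distribution and a point mass at its mean.  For a 1-D
    sample vs. a delta at m, W1 equals mean |x - m|, i.e. the mean
    absolute deviation.  Zero when every input gets the identical
    explanation; grows as explanations become more diverse.
    Illustrative only -- not the paper's Wasserstein Globalness."""
    means = attributions.mean(axis=0)          # one mean per feature
    return float(np.mean(np.abs(attributions - means)))

rng = np.random.default_rng(0)
# "Global" explainer: the same attribution vector for all 100 inputs.
identical = np.tile(rng.normal(size=5), (100, 1))
# "Local" explainer: a distinct attribution vector per input.
diverse = rng.normal(size=(100, 5))

print(diversity_score(identical))  # near 0
print(diversity_score(diverse))    # noticeably larger
```

The two extremes in the abstract (all explanations identical vs. widely spread) map to the low and high ends of this score; the paper's contribution is an axiomatically justified version of this comparison built on optimal transport.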
Problem

Research questions and friction points this paper is trying to address.

How to define a quantifiable complexity measure, globalness, for explainers.
Which axiomatic properties any measure of explainer diversity should satisfy.
Whether Wasserstein Globalness aids meaningful comparison and selection of explainers.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines a globalness complexity measure for explainers
Uses Wasserstein Globalness to analyze the distribution of explanations
Validates the measure on image, tabular, and synthetic datasets
Davin Hill
Northeastern University
Machine Learning
Josh Bone
Northeastern University
A. Masoomi
Northeastern University
Max Torop
PhD Student, Northeastern University
Machine Learning
J. Dy
Northeastern University