XAI-Units: Benchmarking Explainability Methods with Unit Tests

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature attribution (FA) methods often produce inconsistent explanations for the same black-box model, and evaluating them is hampered by the absence of ground-truth attributions. To address this, the authors propose XAI-Units, a unit-testing-style benchmark for FA methods. It procedurally generates interpretable synthetic models endowed with known internal mechanisms, such as feature interactions, cancellation effects, and discontinuous outputs, paired with corresponding datasets, so that clear expectations for attribution scores exist even where real-world ground truth does not. Inspired by software engineering practice, XAI-Units provides a standardized evaluation framework with a suite of built-in metrics. Experiments reveal systematic performance disparities among state-of-the-art FA methods across distinct, atomic kinds of model reasoning, offering empirically grounded and verifiable guidance for selecting appropriate XAI methods in practice.

📝 Abstract
Feature attribution (FA) methods are widely used in explainable AI (XAI) to help users understand how the inputs of a machine learning model contribute to its outputs. However, different FA methods often provide disagreeing importance scores for the same model. In the absence of ground truth or in-depth knowledge about the inner workings of the model, it is often difficult to meaningfully determine which of the different FA methods produces more suitable explanations in different contexts. As a step towards addressing this issue, we introduce the open-source XAI-Units benchmark, specifically designed to evaluate FA methods against diverse types of model behaviours, such as feature interactions, cancellations, and discontinuous outputs. Our benchmark provides a set of paired datasets and models with known internal mechanisms, establishing clear expectations for desirable attribution scores. Accompanied by a suite of built-in evaluation metrics, XAI-Units streamlines systematic experimentation and reveals how FA methods perform against distinct, atomic kinds of model reasoning, similar to unit tests in software engineering. Crucially, by using procedurally generated models tied to synthetic datasets, we pave the way towards an objective and reliable comparison of FA methods.
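The unit-test analogy can be sketched as follows. This is an illustrative example, not code from the paper: occlusion (baseline substitution) stands in for the FA methods under test, and the two "unit tests" exercise a linear mechanism, where the expected attribution is known exactly, and a feature-interaction mechanism, where occlusion's known failure mode shows up.

```python
import numpy as np

def occlusion_attribution(f, x, baseline=0.0):
    """Attribute each feature by replacing it with a baseline value
    and measuring the drop in the model's output."""
    out = f(x)
    attrs = np.empty_like(x)
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline
        attrs[i] = out - f(x_masked)
    return attrs

# Unit test 1: linear mechanism -- the expected attribution for
# feature i (against a zero baseline) is exactly w[i] * x[i].
w = np.array([2.0, -1.0, 0.5])
linear = lambda v: w @ v
x = np.array([1.0, 3.0, 4.0])
assert np.allclose(occlusion_attribution(linear, x), w * x)

# Unit test 2: feature interaction -- occlusion credits the full
# product x0 * x1 to *both* features, so the scores over-count.
product = lambda v: v[0] * v[1]
attrs = occlusion_attribution(product, np.array([2.0, 3.0]))
print(attrs)        # [6. 6.]
print(attrs.sum())  # 12.0, although the model output is only 6.0
```

In the benchmark's terms, occlusion would pass the linear test and fail the interaction test; running many FA methods against many such atomic mechanisms is what surfaces the systematic disparities the paper reports.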
Problem

Research questions and friction points this paper is trying to address.

Evaluating disagreement among feature attribution methods in XAI
Lack of ground truth for comparing explanation methods effectively
Need for standardized benchmarks to assess FA method performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source benchmark for FA methods
Paired datasets with known mechanisms
Procedurally generated synthetic datasets
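The "paired datasets with known mechanisms" idea can be illustrated with a small generator. The function name, signature, and the cancellation setup below are hypothetical stand-ins, not the benchmark's actual API: each generated case bundles a synthetic dataset, a model with a known cancellation mechanism, and the attribution scores that mechanism implies.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cancellation_case(n_samples=100, n_features=4):
    """Procedurally generate a (dataset, model, expectation) triple.

    The model computes x[0] - x[1], and the dataset forces x[1] == x[0],
    so the two contributions always cancel and the output is zero.
    An FA method meeting the expectation should assign equal-magnitude,
    opposite-sign scores to features 0 and 1, and zero to the rest.
    """
    X = rng.normal(size=(n_samples, n_features))
    X[:, 1] = X[:, 0]                # force the cancelling pair
    model = lambda x: x[0] - x[1]    # identically zero on this dataset
    expected = lambda x: np.array([x[0], -x[1]] + [0.0] * (n_features - 2))
    return X, model, expected
```

A benchmark harness can then loop over such generators, run each FA method on every (dataset, model) pair, and score the resulting attributions against `expected`, exactly like a unit-test suite iterating over test cases.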
Jun Rui Lee
Department of Computing, Imperial College London
Sadegh Emami
Department of Computing, Imperial College London
Michael David Hollins
Department of Computing, Imperial College London
Timothy C. H. Wong
Department of Computing, Imperial College London
Carlos Ignacio Villalobos Sánchez
Department of Computing, Imperial College London
Francesca Toni
Imperial College London (Artificial Intelligence)
Dekai Zhang
Imperial College London (Machine Learning)
Adam Dejl
Imperial College London (Machine Learning, Explainable AI, Natural Language Processing)