🤖 AI Summary
Existing feature attribution (FA) methods often produce inconsistent explanations for the same black-box model, and evaluating them is hampered by the absence of ground-truth attributions. To address this, the authors propose XAI-Units, an open-source, unit-testing-style benchmark for FA methods. It procedurally generates interpretable synthetic models with known internal mechanisms, such as feature interactions, cancellation effects, and discontinuous outputs, paired with corresponding datasets. Because the mechanisms are known, clear expectations for desirable attribution scores can be established, enabling objective and reproducible evaluation. Inspired by software engineering practice, XAI-Units pairs these model-dataset units with a suite of built-in evaluation metrics, streamlining systematic experimentation. Experiments reveal systematic performance differences among FA methods across distinct, atomic kinds of model reasoning. These findings offer grounded, verifiable guidance for selecting appropriate XAI methods in practice.
📝 Abstract
Feature attribution (FA) methods are widely used in explainable AI (XAI) to help users understand how the inputs of a machine learning model contribute to its outputs. However, different FA methods often provide conflicting importance scores for the same model. In the absence of ground truth or in-depth knowledge about the inner workings of the model, it is often difficult to meaningfully determine which of the different FA methods produce more suitable explanations in a given context. As a step towards addressing this issue, we introduce the open-source XAI-Units benchmark, specifically designed to evaluate FA methods against diverse types of model behaviour, such as feature interactions, cancellations, and discontinuous outputs. Our benchmark provides a set of paired datasets and models with known internal mechanisms, establishing clear expectations for desirable attribution scores. Accompanied by a suite of built-in evaluation metrics, XAI-Units streamlines systematic experimentation and reveals how FA methods perform against distinct, atomic kinds of model reasoning, similar to unit tests in software engineering. Crucially, by using procedurally generated models tied to synthetic datasets, we pave the way towards an objective and reliable comparison of FA methods.
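To make the unit-test analogy concrete, here is a minimal sketch of the idea: a tiny model with a known cancellation mechanism, a simple Gradient×Input attribution computed by finite differences, and an assertion encoding the expected attribution pattern. The model, attribution routine, and expectations below are illustrative assumptions for exposition, not the actual XAI-Units benchmark API.

```python
# Illustrative sketch (not the XAI-Units API): a "unit test" that checks a
# feature attribution method against a model with a known mechanism.
import numpy as np

def model(x):
    # Known mechanism: y = 3*x0 - 3*x1, so equal inputs cancel exactly.
    return 3.0 * x[0] - 3.0 * x[1]

def gradient_x_input(f, x, eps=1e-6):
    # Gradient*Input attribution, with gradients via central finite differences.
    grads = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grads * x

def test_cancellation():
    # Expectation: when x0 == x1 the two attributions are equal in magnitude
    # and opposite in sign, and they sum to the model output (zero here).
    x = np.array([2.0, 2.0])
    attr = gradient_x_input(model, x)
    assert np.isclose(attr.sum(), model(x))   # completeness on this linear model
    assert np.isclose(attr[0], -attr[1])      # cancellation pattern recovered

test_cancellation()
```

A benchmark in this style would run many such atomic checks, one per mechanism (interaction, cancellation, discontinuity), and report per-mechanism scores for each FA method rather than a single aggregate number.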