The effect of whitening on explanation performance

📅 2026-02-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the unreliability of feature attribution methods, which often erroneously assign importance to non-informative variables, such as suppressor variables, thereby compromising explanation fidelity. It is the first to systematically evaluate the impact of five whitening transformations on sixteen mainstream explainable AI (XAI) methods, using both the XAI-TRIS synthetic benchmark and a minimal two-dimensional linear classification task. Combining theoretical modeling with empirical analysis, the work shows that data whitening can improve attribution accuracy by decorrelating features, but that the degree of improvement depends strongly on the specific XAI method and model architecture. The findings underscore the critical role of preprocessing in preserving explanation fidelity and offer a new perspective on improving the reliability of interpretability techniques.
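To make the preprocessing step concrete, below is a minimal sketch of two standard whitening transforms, PCA and ZCA whitening, built from the eigendecomposition of the sample covariance. This is our own illustrative NumPy code, not the paper's implementation; the function name `whiten` and its arguments are assumptions.

```python
import numpy as np

def whiten(X, method="zca", eps=1e-8):
    """Decorrelate the columns of X so their covariance is ~identity.

    Illustrative sketch only (not the paper's code). `method` selects
    PCA whitening (rotate into the eigenbasis) or ZCA whitening
    (rotate back, staying close to the original feature axes).
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    W = eigvecs / np.sqrt(eigvals + eps)    # scale components to unit variance
    if method == "pca":
        return Xc @ W                       # PCA whitening: eigenbasis
    return Xc @ W @ eigvecs.T               # ZCA whitening: original basis

# Correlated toy data: after whitening, the covariance is ~identity.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=5000)
print(np.cov(whiten(X).T).round(2))         # ~[[1. 0.], [0. 1.]]
```

ZCA whitening is often preferred when whitened features should remain interpretable in the original coordinate system, which matters when attributions are compared feature by feature.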

📝 Abstract
Explainable Artificial Intelligence (XAI) aims to provide transparent insights into machine learning models, yet the reliability of many feature attribution methods remains a critical challenge. Prior research (Haufe et al., 2014; Wilming et al., 2022, 2023) has demonstrated that these methods often erroneously assign significant importance to non-informative variables, such as suppressor variables, leading to fundamental misinterpretations. Since statistical suppression is induced by feature dependencies, this study investigates whether data whitening, a common preprocessing technique for decorrelation, can mitigate such errors. Using the established XAI-TRIS benchmark (Clark et al., 2024b), which offers synthetic ground-truth data and quantitative measures of explanation correctness, we empirically evaluate 16 popular feature attribution methods applied in combination with 5 distinct whitening transforms. Additionally, we analyze a minimal linear two-dimensional classification problem (Wilming et al., 2023) to theoretically assess whether whitening can remove the impact of suppressor features from Bayes-optimal models. Our results indicate that, while specific whitening techniques can improve explanation performance, the degree of improvement varies substantially across XAI methods and model architectures. These findings highlight the complex relationship between data non-linearities, preprocessing quality, and attribution fidelity, underscoring the vital role of preprocessing techniques in enhancing model interpretability.
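As a concrete illustration of the suppressor phenomenon the abstract refers to, here is a hedged sketch in the spirit of the minimal two-dimensional problem of Wilming et al. (2023); the construction and all variable names are our own, not the paper's exact setup. Feature x2 carries no class information by itself, yet a (near-)optimal linear classifier assigns it a large weight because it cancels noise shared with the informative feature x1, so attribution methods that read off such weights will flag x2 as important.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
y = rng.integers(0, 2, n)              # binary class labels
d = rng.normal(size=n)                 # distractor noise shared by both features
x1 = (2 * y - 1) + d                   # informative feature: class signal + noise
x2 = d                                 # suppressor: pure noise, no class signal
X = np.column_stack([x1, x2])

clf = LogisticRegression().fit(X, y)
print("weights:", clf.coef_.round(2))  # x2 receives a large negative weight:
                                       # the model subtracts it to cancel d,
                                       # even though x2 alone predicts nothing
print("corr(x2, y):", round(np.corrcoef(x2, y)[0, 1], 3))  # ~0
```

Decorrelating x1 and x2, which whitening does by construction, is precisely the intervention the study evaluates as a potential remedy for this misattribution.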
Problem

Research questions and friction points this paper is trying to address.

feature attribution
suppressor variables
explainable AI
data whitening
explanation reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

whitening
feature attribution
explainable AI
suppressor variables
XAI-TRIS benchmark
Benedict Clark
Physikalisch-Technische Bundesanstalt, Berlin, Germany
Stoyan Karastoyanov
Technische Universität Berlin, Germany
Rick Wilming
Physikalisch-Technische Bundesanstalt, Berlin, Germany; Technische Universität Berlin, Germany
Stefan Haufe
Technische Universität Berlin; Physikalisch-Technische Bundesanstalt; Charité - Universitätsmedizin
Machine Learning
Signal Processing
Neuroimaging
Brain Connectivity
AI in Biomedicine