A Meaningful Perturbation Metric for Evaluating Explainability Methods

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
How can the effectiveness of deep neural network attribution methods be objectively evaluated? This paper proposes a perturbation-based evaluation framework leveraging generative image inpainting, wherein pre-trained diffusion models perform semantically coherent and in-distribution replacements of high-saliency regions identified by attribution maps, thereby quantifying how effectively different explanation methods alter model predictions. Our approach is the first to integrate generative models into attribution evaluation, overcoming key limitations of conventional occlusion- or blurring-based perturbations—namely semantic distortion and out-of-distribution bias—via attribution-driven conditional inpainting and saliency-guided masking. Evaluated across multiple models and attribution methods, our framework achieves an average 37% improvement in Spearman correlation with human judgment and attains a consistency score of 0.82 with human explanation preferences, significantly outperforming existing evaluation paradigms.

Technology Category

Application Category

📝 Abstract
Deep neural networks (DNNs) have demonstrated remarkable success, yet their wide adoption is often hindered by their opaque decision-making. To address this, attribution methods have been proposed to assign relevance values to each part of the input. However, different methods often produce entirely different relevance maps, necessitating the development of standardized metrics to evaluate them. Typically, such evaluation is performed through perturbation, wherein high- or low-relevance regions of the input image are manipulated to examine the change in prediction. In this work, we introduce a novel approach, which harnesses image generation models to perform targeted perturbation. Specifically, we focus on inpainting only the high-relevance pixels of an input image to modify the model's predictions while preserving image fidelity. This is in contrast to existing approaches, which often produce out-of-distribution modifications, leading to unreliable results. Through extensive experiments, we demonstrate the effectiveness of our approach in generating meaningful rankings across a wide range of models and attribution methods. Crucially, we establish that the ranking produced by our metric exhibits significantly higher correlation with human preferences compared to existing approaches, underscoring its potential for enhancing interpretability in DNNs.
Problem

Research questions and friction points this paper is trying to address.

Evaluating explainability methods for opaque DNN decisions
Standardizing metrics to compare differing relevance maps
Improving perturbation techniques to maintain image fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses image generation for targeted perturbation
Inpaints high-relevance pixels to preserve fidelity
Produces human-correlated rankings for explainability
🔎 Similar Papers
No similar papers found.