🤖 AI Summary
Gradient-based explanations for ReLU networks suffer from high-frequency noise due to activation discontinuities, forcing post-hoc methods like GradCAM to trade off smoothness against faithfulness. Method: We propose the first unified framework grounded in spectral analysis to quantitatively characterize the contribution of high-frequency components in gradient explanations to model decisions. We formally define and measure the “explanation gap”—the systematic distortion introduced by smoothing-based surrogate models. Contribution/Results: Through theoretical analysis and extensive experiments across diverse datasets and architectures, we uncover the distortion mechanisms inherent in mainstream explanation methods and derive principled guidelines for high-frequency regularization to enhance explanation fidelity. Our work establishes a quantifiable, empirically verifiable analytical paradigm for explainable AI, offering concrete pathways for methodological improvement.
📝 Abstract
ReLU networks, while prevalent for visual data, exhibit sharp activation transitions and can rely on individual pixels for their predictions, making vanilla gradient-based explanations noisy and difficult to interpret. Existing methods, such as GradCAM, smooth these explanations by constructing surrogate models, at the cost of faithfulness. We introduce a unifying spectral framework to systematically analyze and quantify smoothness, faithfulness, and their trade-off in explanations. Using this framework, we quantify and regularize the high-frequency information that ReLU networks contribute to explanations, providing a principled approach to navigating this trade-off. Our analysis characterizes how surrogate-based smoothing distorts explanations, leading to an "explanation gap" that we formally define and measure for different post-hoc methods. Finally, we validate our theoretical findings across different design choices, datasets, and ablations.
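To make the spectral viewpoint concrete, here is a minimal sketch of how one might measure the high-frequency content of a saliency map via a 2D Fourier decomposition. The function name, the radial `cutoff` parameter, and the test maps are illustrative assumptions, not the paper's actual metric:

```python
import numpy as np

def high_frequency_energy_ratio(saliency, cutoff=0.25):
    # Fraction of the map's spectral energy above a normalized radial
    # frequency cutoff (0..0.5). Higher values indicate noisier,
    # higher-frequency explanations. Illustrative metric, not the
    # paper's exact definition.
    spectrum = np.fft.fftshift(np.fft.fft2(saliency))
    power = np.abs(spectrum) ** 2
    h, w = saliency.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)  # distance from DC component
    return power[radius > cutoff].sum() / power.sum()

# A smooth (low-frequency) map should score lower than white noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
smooth_map = np.sin(2 * np.pi * xx / 64) * np.sin(2 * np.pi * yy / 64)
noisy_map = rng.standard_normal((64, 64))
print(high_frequency_energy_ratio(smooth_map)
      < high_frequency_energy_ratio(noisy_map))  # True
```

A smoothing-based post-hoc method would drive this ratio toward zero; the framework's point is that such smoothing should be balanced against the faithfulness lost, i.e. the explanation gap.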