🤖 AI Summary
This work addresses the poor stability and high noise of gradient-based saliency maps, such as Vanilla Gradient and Integrated Gradients, which limit their reliability in high-stakes applications. Through a curvature-based analysis, the study links saliency-map stability to how smoothly the input-gradient field varies locally. Guided by this connection, it proposes augmenting adversarial training with a lightweight, differentiable Gaussian smoothing block, mitigating the output-side explanation instability that adversarial training can induce while preserving its beneficial sparsity. Experiments on FMNIST, CIFAR-10, and ImageNette demonstrate substantial improvements in both the stability and the sparsity of the generated saliency maps, and a user study with 65 participants further confirms that the resulting explanations are perceived as both more sufficient and more trustworthy.
📝 Abstract
Gradient-based saliency methods such as Vanilla Gradient (VG) and Integrated Gradients (IG) are widely used to explain image classifiers, yet the resulting maps are often noisy and unstable, limiting their usefulness in high-stakes settings. Most prior work improves explanations by modifying the attribution algorithm, leaving open how the training procedure shapes explanation quality. We take a training-centered view and first provide a curvature-based analysis linking attribution stability to how smoothly the input-gradient field varies locally. Guided by this connection, we study adversarial training and identify a consistent trade-off: it yields sparser and more input-stable saliency maps, but can degrade output-side stability, causing explanations to change even when predictions remain unchanged and logits vary only slightly. To mitigate this, we propose augmenting adversarial training with a lightweight feature-map smoothing block that applies a differentiable Gaussian filter in an intermediate layer. Across FMNIST, CIFAR-10, and ImageNette, our method preserves the sparsity benefits of adversarial training while improving both input-side and output-side stability. A human study with 65 participants further shows that smoothed adversarial saliency maps are perceived as more sufficient and trustworthy. Overall, our results demonstrate that explanation quality is critically shaped by training, and that pairing simple smoothing with robust training provides a practical path toward saliency maps that are both sparse and stable.
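The abstract does not spell out how the feature-map smoothing block is implemented; below is a minimal NumPy sketch of the general idea under stated assumptions. It applies a fixed, normalized Gaussian kernel channel-wise to a `(C, H, W)` feature map via two separable 1D passes. The function names (`gaussian_kernel`, `smooth_feature_map`) and the choice of reflect padding are illustrative, not taken from the paper; in an actual trainable block the same construction would be expressed as a depthwise convolution in an autodiff framework (e.g. PyTorch or JAX) so that gradients flow through it and `sigma` could even be learned.

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1D Gaussian kernel; normalization keeps the filter mean-preserving."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Channel-wise Gaussian blur of a (C, H, W) feature map.

    The 2D Gaussian is separable, so we run one 1D pass along H and one
    along W, which is cheaper than a full 2D convolution. Reflect padding
    preserves the spatial size.
    """
    radius = max(1, int(round(3 * sigma)))  # truncate the kernel at ~3 sigma
    k = gaussian_kernel(sigma, radius)
    x = np.pad(fmap, ((0, 0), (radius, radius), (radius, radius)), mode="reflect")
    # Vertical pass (along H), then horizontal pass (along W).
    x = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, x)
    x = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 2, x)
    return x
```

Because the kernel sums to one, a constant feature map passes through unchanged, while high-frequency activations are attenuated, which is the smoothing effect the method relies on for more stable input gradients.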