Training for Trustworthy Saliency Maps: Adversarial Training Meets Feature-Map Smoothing

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor stability and high noise levels of gradient-based saliency maps—such as Vanilla Gradient and Integrated Gradients—which hinder their reliability in high-stakes applications. By analyzing curvature during training, the study reveals an intrinsic link between saliency map stability and the local smoothness of the input-gradient field. It proposes a novel approach that integrates adversarial training with lightweight differentiable Gaussian smoothing, effectively mitigating the explanation instability induced by adversarial training while preserving its beneficial sparsity properties. Experiments on FMNIST, CIFAR-10, and ImageNette demonstrate substantial improvements in both stability and sparsity of the generated saliency maps. A user study involving 65 participants further confirms that the resulting explanations are perceived as more sufficient and significantly more trustworthy.

📝 Abstract
Gradient-based saliency methods such as Vanilla Gradient (VG) and Integrated Gradients (IG) are widely used to explain image classifiers, yet the resulting maps are often noisy and unstable, limiting their usefulness in high-stakes settings. Most prior work improves explanations by modifying the attribution algorithm, leaving open how the training procedure shapes explanation quality. We take a training-centered view and first provide a curvature-based analysis linking attribution stability to how smoothly the input-gradient field varies locally. Guided by this connection, we study adversarial training and identify a consistent trade-off: it yields sparser and more input-stable saliency maps, but can degrade output-side stability, causing explanations to change even when predictions remain unchanged and logits vary only slightly. To mitigate this, we propose augmenting adversarial training with a lightweight feature-map smoothing block that applies a differentiable Gaussian filter in an intermediate layer. Across FMNIST, CIFAR-10, and ImageNette, our method preserves the sparsity benefits of adversarial training while improving both input-side stability and output-side stability. A human study with 65 participants further shows that smoothed adversarial saliency maps are perceived as more sufficient and trustworthy. Overall, our results demonstrate that explanation quality is critically shaped by training, and that simple smoothing with robust training provides a practical path toward saliency maps that are both sparse and stable.
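The abstract's core architectural idea is a feature-map smoothing block: a differentiable Gaussian filter inserted in an intermediate layer so that gradients flow through it during adversarial training. The paper's implementation is not shown on this page; below is a minimal PyTorch sketch under the common assumption that such a block is realized as a depthwise convolution with a fixed, normalized Gaussian kernel (the kernel size and sigma here are illustrative defaults, not values from the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianSmoothing2d(nn.Module):
    """Differentiable, channel-wise Gaussian smoothing of feature maps.

    Hypothetical sketch of the smoothing block described in the abstract:
    a fixed Gaussian kernel applied as a depthwise convolution, so the
    operation is cheap, parameter-free, and fully differentiable.
    """

    def __init__(self, channels: int, kernel_size: int = 5, sigma: float = 1.0):
        super().__init__()
        # Build a 1D Gaussian, then take the outer product for a 2D kernel.
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g1d = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
        g1d = g1d / g1d.sum()  # normalize so the filter preserves the mean
        kernel2d = torch.outer(g1d, g1d)
        # One copy of the kernel per channel (depthwise convolution layout).
        weight = kernel2d.expand(channels, 1, kernel_size, kernel_size).clone()
        self.register_buffer("weight", weight)  # fixed, not a learned parameter
        self.groups = channels
        self.padding = kernel_size // 2  # "same" padding keeps spatial size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.conv2d(x, self.weight, padding=self.padding, groups=self.groups)
```

In use, such a block would simply be interleaved between two stages of the classifier backbone (e.g. `nn.Sequential(stage1, GaussianSmoothing2d(channels=64), stage2)`), leaving the rest of the adversarial-training pipeline unchanged.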
Problem

Research questions and friction points this paper is trying to address.

saliency maps, attribution stability, adversarial training, input-gradient field, explanation trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial training, feature-map smoothing, saliency map stability, gradient-based explanation, trustworthy AI