Model Guidance via Robust Feature Attribution

📅 2025-06-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Models often rely on spurious shortcut features, undermining robustness and generalization. This paper proposes a training framework that compares human- or machine-provided feature relevance annotations against feature-salience explanations and uses the mismatch to guide the loss. Its simplified objective jointly optimizes for explanation robustness and shortcut suppression, with a theoretical argument for why it yields a more reliable optimization signal than prior objectives with similar aims. Experiments spanning domains such as medical imaging and natural language processing show a consistent 20% reduction in test-time misclassifications over state-of-the-art methods, and ablations indicate that annotation quality matters more than annotation quantity. The implementation is publicly available.

📝 Abstract
Controlling the patterns a model learns is essential to preventing reliance on irrelevant or misleading features. Such reliance on irrelevant features, often called shortcut features, has been observed across domains, including medical imaging and natural language processing, where it may lead to real-world harms. A common mitigation strategy leverages annotations (provided by humans or machines) indicating which features are relevant or irrelevant. These annotations are compared to model explanations, typically in the form of feature salience, and used to guide the loss function during training. Unfortunately, recent works have demonstrated that feature salience methods are unreliable and therefore offer a poor signal to optimize. In this work, we propose a simplified objective that simultaneously optimizes for explanation robustness and mitigation of shortcut learning. Unlike prior objectives with similar aims, we demonstrate theoretically why our approach ought to be more effective. Across a comprehensive series of experiments, we show that our approach consistently reduces test-time misclassifications by 20% compared to state-of-the-art methods. We also extend prior experimental settings to include natural language processing tasks. Additionally, we conduct novel ablations that yield practical insights, including the relative importance of annotation quality over quantity. Code for our method and experiments is available at: https://github.com/Mihneaghitu/ModelGuidanceViaRobustFeatureAttribution.
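The annotation-guided training the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's method; it is a hypothetical "right for the right reasons"-style toy in plain numpy, assuming a logistic-regression model where the input-gradient saliency of a feature is proportional to its weight, so penalizing the saliency of features annotated as irrelevant reduces to penalizing the corresponding weights:

```python
import numpy as np

# Synthetic data: features 0 and 1 are causal, feature 2 is a spurious
# shortcut that correlates with the label only in the training set.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
X[:, 2] = y + 0.1 * rng.normal(size=n)   # shortcut feature
mask = np.array([0.0, 0.0, 1.0])         # annotation: feature 2 is irrelevant

def train(lam, steps=2000, lr=0.1):
    """Logistic regression; lam weights the saliency penalty on masked features."""
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        # BCE gradient plus gradient of the penalty lam * ||mask * w||^2
        grad_w = X.T @ (p - y) / n + 2.0 * lam * mask * w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w

w_plain = train(lam=0.0)    # free to exploit the shortcut
w_guided = train(lam=1.0)   # annotation-guided: shortcut weight is suppressed
```

In this toy, `w_guided[2]` ends up far smaller in magnitude than `w_plain[2]`, mimicking how the paper's objective steers the model away from annotated-irrelevant features; the paper's actual contribution is a robust-attribution version of such a penalty with theoretical guarantees.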
Problem

Research questions and friction points this paper is trying to address.

Prevent model reliance on irrelevant shortcut features
Feature-salience explanations are unreliable, giving annotation-guided training a poor signal to optimize
Shortcut learning causes real-world harms in domains such as medical imaging and NLP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simplified objective jointly optimizing explanation robustness and shortcut mitigation
Theoretical analysis of why the objective should outperform prior guidance losses
Consistent 20% reduction in test-time misclassifications vs. state-of-the-art, with extension to NLP tasks and ablations on annotation quality vs. quantity