MiMu: Mitigating Multiple Shortcut Learning Behavior of Transformers

πŸ“… 2025-04-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Transformer models trained via Empirical Risk Minimization (ERM) are prone to exploiting multiple unknown spurious correlations ("shortcuts"), over-relying in particular on strong shortcuts, which severely degrades out-of-distribution robustness and generalization. To address this, we propose MiMu, a two-stage framework: Stage I applies a self-calibration strategy to the source model to curb shortcut reliance and overconfident predictions; Stage II applies a self-improvement strategy to the target model, combining random attention masking with adaptive alignment to the calibrated source model's attention, to mitigate dependence on both strong and weak shortcuts. To our knowledge, this is the first approach that systematically tackles learning under multiple unknown shortcuts without requiring auxiliary supervision or shortcut priors. Extensive experiments across NLP and CV benchmarks demonstrate significant improvements in robust generalization, consistently outperforming state-of-the-art methods designed for single-shortcut mitigation.

πŸ“ Abstract
Empirical Risk Minimization (ERM) models often rely on spurious correlations between features and labels during learning, leading to shortcut learning behavior that undermines robust generalization. Current research mainly targets identifying or mitigating a single shortcut; however, in real-world scenarios, the cues within data are diverse and unknown. In empirical studies, we reveal that models rely to varying extents on different shortcuts: compared to weak shortcuts, models depend more heavily on strong shortcuts, which results in poor generalization ability. To address these challenges, we propose MiMu, a novel method integrated with Transformer-based ERMs designed to Mitigate Multiple shortcut learning behavior, which incorporates a self-calibration strategy and a self-improvement strategy. For the source model, we first propose the self-calibration strategy to prevent the model from relying on shortcuts and making overconfident predictions. We then design a self-improvement strategy for the target model to reduce reliance on multiple shortcuts. The random mask strategy randomly masks partial attention positions to diversify the target model's focus rather than letting it concentrate on a fixed region. Meanwhile, the adaptive attention alignment module aligns the target model's attention weights with the calibrated source model, without the need for post-hoc attention maps or supervision. Finally, extensive experiments conducted on Natural Language Processing (NLP) and Computer Vision (CV) tasks demonstrate the effectiveness of MiMu in improving robust generalization.
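As a rough illustration of the two self-improvement components named in the abstract (the random mask strategy and the adaptive attention alignment module), the sketch below shows one plausible realization in PyTorch. The function names, the mask ratio, and the choice of KL divergence as the alignment objective are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def random_attention_mask(attn: torch.Tensor, mask_ratio: float = 0.1) -> torch.Tensor:
    """Randomly zero out a fraction of attention positions so the model cannot
    fixate on one region (a sketch of the random mask strategy). `attn` is
    assumed to have shape (batch, heads, query_len, key_len), rows summing to 1."""
    keep = (torch.rand_like(attn) >= mask_ratio).float()
    masked = attn * keep
    # Renormalize so each row remains a distribution over key positions.
    return masked / masked.sum(dim=-1, keepdim=True).clamp_min(1e-8)

def attention_alignment_loss(target_attn: torch.Tensor,
                             source_attn: torch.Tensor) -> torch.Tensor:
    """KL divergence pulling the target model's attention toward the calibrated
    source model's attention (one plausible form of the alignment module)."""
    return F.kl_div(target_attn.clamp_min(1e-8).log(), source_attn,
                    reduction="batchmean")

# Usage sketch: mask a batch of attention maps, then measure alignment.
torch.manual_seed(0)
attn = torch.softmax(torch.randn(2, 4, 8, 8), dim=-1)
masked = random_attention_mask(attn, mask_ratio=0.2)
loss = attention_alignment_loss(masked, attn)
```

In training, a loss like this would be added to the ERM objective so the target model keeps diverse attention while staying close to the calibrated source model.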
Problem

Research questions and friction points this paper is trying to address.

Mitigating multiple shortcut learning in Transformers
Reducing reliance on spurious feature-label correlations
Improving model robustness and generalization performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-calibration strategy prevents shortcut reliance in the source model
Self-improvement strategy reduces dependence on multiple shortcuts
Random attention masking diversifies the target model's focus
πŸ”Ž Similar Papers
2024-10-03arXiv.orgCitations: 0
👥 Authors
Lili Zhao (State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei 230026, China)
Qi Liu (State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei 230026, China)
Wei Chen (State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei 230026, China)
Liyi Chen (PhD at PolyU, HK)
Ruijun Sun (State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei 230026, China)
Min Hou (Hefei University of Technology)
Yang Wang (iFLYTEK AI Research (Central China), iFLYTEK Co., Ltd, Hefei 230088, China)
Shijin Wang (Tongji University)