Let Samples Speak: Mitigating Spurious Correlation by Exploiting the Clusterness of Samples

📅 2025-12-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deep learning models often fail to generalize due to spurious correlations: non-causal statistical associations between features and labels. Existing debiasing methods rely either on manually annotated bias attributes or strong prior assumptions (e.g., bias simplicity), limiting their applicability to real-world data with complex, latent spurious patterns. To address this, we propose a fully data-driven debiasing framework: first, we automatically identify spurious features by measuring clustering dispersion of samples in the feature space; second, we introduce a grouping-based neutralization strategy and a contrastive bias-invariant feature alignment transformation; finally, we jointly optimize the classifier and representation learner in an end-to-end manner. Our approach requires no bias annotations or restrictive assumptions. On standard image and NLP debiasing benchmarks, it improves worst-group accuracy by over 20% relative to Empirical Risk Minimization (ERM), significantly enhancing model robustness and fairness.

๐Ÿ“ Abstract
Deep learning models are known to often learn features that spuriously correlate with the class label during training but are irrelevant to the prediction task. Existing methods typically address this issue by annotating potential spurious attributes, or filtering spurious features based on some empirical assumptions (e.g., simplicity of bias). However, these methods may yield unsatisfactory performance due to the intricate and elusive nature of spurious correlations in real-world data. In this paper, we propose a data-oriented approach to mitigate the spurious correlation in deep learning models. We observe that samples that are influenced by spurious features tend to exhibit a dispersed distribution in the learned feature space. This allows us to identify the presence of spurious features. Subsequently, we obtain a bias-invariant representation by neutralizing the spurious features based on a simple grouping strategy. Then, we learn a feature transformation to eliminate the spurious features by aligning with this bias-invariant representation. Finally, we update the classifier by incorporating the learned feature transformation and obtain an unbiased model. By integrating the aforementioned identifying, neutralizing, eliminating and updating procedures, we build an effective pipeline for mitigating spurious correlation. Experiments on image and NLP debiasing benchmarks show an improvement in worst group accuracy of more than 20% compared to standard empirical risk minimization (ERM). Code and checkpoints are available at https://github.com/davelee-uestc/nsf_debiasing.
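The identification stage described in the abstract rests on one observation: samples affected by spurious features scatter more in the learned feature space. A minimal numpy sketch of that idea follows. The function name `class_dispersion` and the specific measure (mean distance of a class's samples to its class centroid) are illustrative assumptions, not the paper's exact dispersion statistic.

```python
import numpy as np

def class_dispersion(features, labels):
    """Score each class by how dispersed its samples are in feature space.

    Illustrative proxy: mean Euclidean distance of a class's samples to
    the class centroid. A high score flags a class whose embedding is
    scattered, which the paper takes as evidence of spurious features.
    """
    scores = {}
    for c in np.unique(labels):
        x = features[labels == c]
        centroid = x.mean(axis=0)
        scores[int(c)] = float(np.linalg.norm(x - centroid, axis=1).mean())
    return scores

# Toy check: a tightly clustered class vs. a dispersed one.
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(50, 8))   # compact class 0
loose = rng.normal(0.0, 2.0, size=(50, 8))   # scattered class 1
feats = np.vstack([tight, loose])
labels = np.array([0] * 50 + [1] * 50)
scores = class_dispersion(feats, labels)     # scores[1] >> scores[0]
```

In this toy setup the scattered class receives a much larger dispersion score, which is the signal the pipeline would use to flag it for the subsequent neutralization step.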
Problem

Research questions and friction points this paper is trying to address.

Mitigates spurious correlations in deep learning models
Identifies spurious features via sample distribution analysis
Learns bias-invariant representations to improve worst-group accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifying spurious features via sample dispersion analysis
Neutralizing bias through simple grouping strategy
Eliminating spurious features via feature transformation alignment
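The neutralization idea in the list above can be sketched in a few lines of numpy. The grouping (`group_ids`) is assumed to be given, e.g., from clustering samples within a class; the function name and the equal-weight averaging of per-group means are illustrative assumptions, meant only to show why averaging group means, rather than all samples, keeps a majority group's spurious direction from dominating the class prototype.

```python
import numpy as np

def neutralized_prototype(features, group_ids):
    """Bias-invariant class prototype: unweighted average of per-group means.

    With a plain sample mean, a majority group drags the prototype toward
    its spurious feature direction. Averaging the group means instead
    weights each group equally, neutralizing that pull.
    """
    means = [features[group_ids == g].mean(axis=0)
             for g in np.unique(group_ids)]
    return np.mean(means, axis=0)

# Toy check: 90 majority samples near (+1, 0), 10 minority near (-1, 0).
rng = np.random.default_rng(1)
majority = rng.normal([1.0, 0.0], 0.05, size=(90, 2))
minority = rng.normal([-1.0, 0.0], 0.05, size=(10, 2))
feats = np.vstack([majority, minority])
groups = np.array([0] * 90 + [1] * 10)

naive = feats.mean(axis=0)                    # pulled toward +1 on axis 0
proto = neutralized_prototype(feats, groups)  # near 0 on axis 0
```

The naive mean sits around +0.8 on the spurious axis, while the group-balanced prototype sits near zero; aligning representations to the latter is what removes the majority group's bias direction.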
Weiwei Li
Beijing University of Chemical Technology
Organic Photovoltaics · Organic Solar Cells · Conjugated Polymers
Junzhuo Liu
University of Electronic Science and Technology of China
Yuanyuan Ren
Shihezi University
Yuchen Zheng
Shihezi University
Yahao Liu
University of Electronic Science and Technology of China
Wen Li
University of Electronic Science and Technology of China