AI Summary
Deep learning models often fail to generalize due to spurious correlations, i.e., non-causal statistical associations between features and labels. Existing debiasing methods rely either on manually annotated bias attributes or on strong prior assumptions (e.g., bias simplicity), limiting their applicability to real-world data with complex, latent spurious patterns. To address this, we propose a fully data-driven debiasing framework: first, we automatically identify spurious features by measuring the clustering dispersion of samples in the feature space; second, we introduce a grouping-based neutralization strategy and a contrastive bias-invariant feature alignment transformation; finally, we jointly optimize the classifier and representation learner in an end-to-end manner. Our approach requires no bias annotations or restrictive assumptions. On standard image and NLP debiasing benchmarks, it improves worst-group accuracy by over 20% relative to Empirical Risk Minimization (ERM), significantly enhancing model robustness and fairness.
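The first two steps (identification by dispersion, neutralization by grouping) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names, the centroid-distance dispersion measure, and the `quantile` threshold are all hypothetical stand-ins for the method described above.

```python
# Minimal sketch (hypothetical, not the authors' code): identify possibly
# bias-influenced samples via their dispersion in feature space, then build
# a bias-invariant target representation by group-balanced averaging.
import numpy as np

def dispersion_scores(features, labels):
    """Per-sample distance to its class centroid in feature space.
    Samples influenced by spurious features are assumed to lie far
    from the centroid (a dispersed distribution)."""
    scores = np.zeros(len(features))
    for c in np.unique(labels):
        idx = labels == c
        centroid = features[idx].mean(axis=0)
        scores[idx] = np.linalg.norm(features[idx] - centroid, axis=1)
    return scores

def neutralized_target(features, labels, scores, quantile=0.8):
    """Split each class into low/high-dispersion groups and average the
    two group means with equal weight, so the over-represented (majority)
    group no longer dominates the class representation."""
    targets = {}
    for c in np.unique(labels):
        idx = labels == c
        thresh = np.quantile(scores[idx], quantile)
        hi = idx & (scores > thresh)   # likely bias-influenced minority group
        lo = idx & (scores <= thresh)  # majority group
        targets[c] = 0.5 * (features[lo].mean(axis=0) + features[hi].mean(axis=0))
    return targets
```

The equal-weight averaging is the "neutralization": each class target is no longer biased toward whichever group happens to contain more samples.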
Abstract
Deep learning models often learn features that spuriously correlate with the class label during training but are irrelevant to the prediction task. Existing methods typically address this issue by annotating potential spurious attributes, or by filtering spurious features based on empirical assumptions (e.g., the simplicity of bias). However, these methods may yield unsatisfactory performance due to the intricate and elusive nature of spurious correlations in real-world data. In this paper, we propose a data-oriented approach to mitigating spurious correlations in deep learning models. We observe that samples influenced by spurious features tend to exhibit a dispersed distribution in the learned feature space, which allows us to identify the presence of spurious features. We then obtain a bias-invariant representation by neutralizing the spurious features with a simple grouping strategy, and learn a feature transformation that eliminates the spurious features by aligning with this bias-invariant representation. Finally, we update the classifier using the learned feature transformation to obtain an unbiased model. Together, these identifying, neutralizing, eliminating, and updating procedures form an effective pipeline for mitigating spurious correlation. Experiments on image and NLP debiasing benchmarks show an improvement in worst-group accuracy of more than 20% compared to standard empirical risk minimization (ERM). Code and checkpoints are available at https://github.com/davelee-uestc/nsf_debiasing.
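The alignment step learns a transformation whose outputs match a bias-invariant target representation. Below is a hedged sketch of what such an alignment objective could look like, using a simple pull/push (contrastive-style) loss. The function name, the MSE pull term, and the hinge `margin` are illustrative assumptions, not the paper's exact objective.

```python
# Hypothetical alignment objective (illustrative stand-in, not the paper's
# exact loss): pull each transformed feature toward its own class's
# bias-invariant target and push it away from other classes' targets.
import numpy as np

def alignment_loss(transformed, targets, labels, margin=1.0):
    """transformed: (n, d) transformed features; targets: dict mapping
    class -> (d,) bias-invariant target; labels: (n,) class labels."""
    loss = 0.0
    classes = list(targets)
    for i, z in enumerate(transformed):
        pos = targets[labels[i]]
        loss += np.sum((z - pos) ** 2)          # pull: align with own target
        for c in classes:
            if c != labels[i]:
                d = np.linalg.norm(z - targets[c])
                loss += max(0.0, margin - d)    # push: hinge away from others
    return loss / len(transformed)
```

In the pipeline described above, a loss of this shape would be minimized over the transformation's parameters jointly with the classifier, so that downstream predictions rely on the bias-invariant representation rather than the spurious features.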