🤖 AI Summary
This work addresses the shortcut learning problem, where models rely on spurious cues that are easy to learn but unreliable, a consequence of dataset biases. We propose DiffDiv, a framework that leverages diffusion probabilistic models (DPMs), uncovering for the first time their capacity at intermediate training stages to disentangle features without supervision and generate counterfactual samples with novel feature compositions. DiffDiv combines these synthetic counterfactuals with disagreement-based ensemble diversification to encourage robust feature acquisition. Crucially, it requires no additional annotations, auxiliary data, or explicit causal assumptions: unsupervised diversity optimization alone suffices to mitigate shortcut dependence. Extensive experiments across multiple benchmarks demonstrate that DiffDiv significantly improves model generalization and robustness, with diversity and performance that match or surpass state-of-the-art methods relying on auxiliary data.
📝 Abstract
Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut learning, in which a model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In this work, we propose DiffDiv, an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs) to mitigate this form of bias. We show that at particular training intervals, DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features. We leverage this crucial property to generate synthetic counterfactuals that increase model diversity via ensemble disagreement. We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, without the need for additional supervised signals. We further empirically quantify its efficacy on several diversification objectives, and finally show improved generalization and diversification on par with prior work that relies on auxiliary data collection.
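To make the disagreement-based diversification concrete, the following is a minimal sketch of one plausible objective: penalize agreement between ensemble members on the DPM-generated counterfactual samples. The loss form (mean pairwise inner product of predicted class distributions) is an illustrative assumption, not necessarily the exact objective used in the paper; `diversity_loss` and its shapes are hypothetical names for this sketch.

```python
import numpy as np


def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def diversity_loss(logits):
    """Pairwise agreement penalty over an ensemble (assumed form).

    logits: array of shape (n_models, batch, n_classes), the ensemble's
    predictions on DPM-generated counterfactual samples.

    Returns the mean pairwise inner product of the predicted class
    distributions; minimizing it pushes members to disagree on these
    samples, so each model must latch onto different (ideally non-shortcut)
    features.
    """
    probs = softmax(logits)
    n_models = probs.shape[0]
    total, n_pairs = 0.0, 0
    for i in range(n_models):
        for j in range(i + 1, n_models):
            # Expected agreement between members i and j, averaged over batch.
            total += np.mean(np.sum(probs[i] * probs[j], axis=-1))
            n_pairs += 1
    return total / n_pairs
```

In practice this term would be added (with some weight) to the standard supervised loss on the original, labeled data; the counterfactual samples themselves need no labels, which is what makes the diversification signal unsupervised.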