🤖 AI Summary
This work addresses the dual challenges of domain shift and label shift in domain generalization, particularly under cross-domain long-tailed distributions where decision boundaries are often dominated by majority classes. To tackle this, we establish the first theoretical generalization bound for imbalanced domain generalization (IDG) and propose a negative-sample-dominated contrastive learning framework. By amplifying gradient signals from negative samples, our approach refines decision boundaries to enhance discriminability for minority classes. It further combines a reweighted cross-entropy loss with prediction-center alignment to preserve cross-domain posterior consistency. Extensive experiments on multiple challenging benchmarks demonstrate that the proposed method significantly improves generalization in imbalanced cross-domain scenarios.
📝 Abstract
Imbalanced Domain Generalization (IDG) seeks to mitigate domain and label shifts simultaneously, both of which fundamentally shape a model's decision boundaries, particularly under heterogeneous long-tailed distributions across domains. Despite its practical significance, IDG remains underexplored, primarily due to the technical complexity of handling the entanglement of the two shifts and the paucity of theoretical foundations. In this paper, we begin by theoretically establishing a generalization bound for IDG, highlighting the roles of posterior discrepancy and decision margin. This bound motivates us to directly steer decision boundaries, marking a clear departure from existing methods. We then propose a novel Negative-Dominant Contrastive Learning (NDCL) framework for IDG that enhances discriminability while enforcing posterior consistency across domains. Specifically, inter-class decision-boundary separation is enhanced by emphasizing negatives as the primary signal in our contrastive learning, which naturally amplifies gradient signals for minority classes and prevents the decision boundary from being biased toward majority classes. Meanwhile, intra-class compactness is encouraged through a re-weighted cross-entropy strategy, and posterior consistency across domains is enforced through a prediction-central alignment strategy. Finally, rigorous experiments on challenging benchmarks validate the effectiveness of our NDCL. The code is available at https://github.com/Alrash/NDCL.
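To make the negative-dominant idea concrete, the sketch below shows a minimal InfoNCE-style loss in which the contribution of negative pairs in the denominator is up-weighted by a factor `beta`, thereby amplifying the gradient signal coming from negatives. This is only an illustrative assumption of how such a loss could look; the function name, the scalar `beta`, and the exact weighting scheme are hypothetical and do not reproduce the paper's actual NDCL objective (see the released code for that).

```python
import numpy as np

def negative_dominant_contrastive_loss(z, y, tau=0.1, beta=2.0):
    """Hypothetical sketch of a negative-dominated contrastive loss.

    z    : (n, d) array of embeddings
    y    : (n,) array of integer class labels
    tau  : temperature
    beta : >1 up-weights negative-pair terms, amplifying their gradients
           (beta=1 recovers a standard supervised InfoNCE-style loss)
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / tau                               # temperature-scaled cosine similarities
    n = len(y)
    losses = []
    for i in range(n):
        pos = [j for j in range(n) if j != i and y[j] == y[i]]
        neg = [j for j in range(n) if y[j] != y[i]]
        if not pos or not neg:                        # skip anchors without both pair types
            continue
        neg_term = beta * np.exp(sim[i, neg]).sum()   # amplified negative mass
        for j in pos:
            p = np.exp(sim[i, j])
            losses.append(-np.log(p / (p + neg_term)))
    return float(np.mean(losses))
```

Because `beta` scales the negative mass in the denominator, increasing it strictly increases the per-anchor loss, pushing the optimizer to separate inter-class boundaries more aggressively than a standard contrastive objective would.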