Negatives-Dominant Contrastive Learning for Generalization in Imbalanced Domains

📅 2026-01-29
🤖 AI Summary
This work addresses the dual challenges of domain shift and label shift in domain generalization, particularly under cross-domain long-tailed distributions where decision boundaries are often dominated by majority classes. To tackle this, we establish the first theoretical generalization bound for imbalanced domain generalization (IDG) and propose a negative-sample-dominated contrastive learning framework. By amplifying gradient signals from negative samples, our approach refines decision boundaries to enhance discriminability for minority classes. Additionally, it integrates reweighted cross-entropy loss with prediction-center alignment to preserve cross-domain posterior consistency. Extensive experiments on multiple challenging benchmarks demonstrate that the proposed method significantly improves generalization performance in imbalanced cross-domain scenarios.

📝 Abstract
Imbalanced Domain Generalization (IDG) focuses on mitigating both domain and label shifts, both of which fundamentally shape the model's decision boundaries, particularly under heterogeneous long-tailed distributions across domains. Despite its practical significance, IDG remains underexplored, primarily due to the technical complexity of handling the entanglement of these two shifts and the paucity of theoretical foundations. In this paper, we begin by theoretically establishing the generalization bound for IDG, highlighting the roles of posterior discrepancy and decision margin. This bound motivates us to focus on directly steering decision boundaries, marking a clear departure from existing methods. We then propose a novel Negative-Dominant Contrastive Learning (NDCL) framework for IDG that enhances discriminability while enforcing posterior consistency across domains. Specifically, inter-class decision-boundary separation is enhanced by treating negatives as the primary signal in our contrastive learning, naturally amplifying gradient signals for minority classes so that the decision boundary is not biased toward majority classes. Meanwhile, intra-class compactness is encouraged through a re-weighted cross-entropy strategy, and posterior consistency across domains is enforced through a prediction-center alignment strategy. Finally, rigorous experiments on challenging benchmarks validate the effectiveness of our NDCL. The code is available at https://github.com/Alrash/NDCL.
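The abstract's core idea, upweighting negatives in the contrastive objective so that their gradients dominate boundary placement, can be illustrated with a minimal sketch. This is not the paper's exact formulation: it assumes a SupCon-style softmax loss over L2-normalized embeddings, and the negative-weight `beta` and the function name are hypothetical.

```python
import numpy as np

def ndcl_style_loss(z, labels, tau=0.1, beta=2.0):
    """Hypothetical negatives-dominant contrastive loss (sketch).

    z      : (N, d) embeddings (L2-normalized inside).
    labels : (N,) integer class ids.
    beta   : >1 upweights negative pairs in the softmax denominator,
             amplifying their gradient contribution; beta=1 recovers
             a standard InfoNCE-like loss. Exact NDCL form assumed.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                                  # pairwise cosine similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    pos = (labels[:, None] == labels[None, :]) & ~eye    # same-class pairs
    neg = ~pos & ~eye                                    # cross-class pairs
    losses = []
    for i in range(n):
        if not pos[i].any():
            continue
        exp_sim = np.exp(sim[i])
        denom_neg = beta * exp_sim[neg[i]].sum()         # negatives dominate here
        for j in np.where(pos[i])[0]:
            losses.append(-np.log(exp_sim[j] / (exp_sim[j] + denom_neg)))
    return float(np.mean(losses))
```

Increasing `beta` inflates the negative mass in the denominator, so for a rare-class anchor the loss (and hence the gradient pushing negatives away) grows, which is one plausible reading of how NDCL keeps minority classes from being absorbed by majority-class boundaries.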
Problem

Research questions and friction points this paper is trying to address.

Imbalanced Domain Generalization
domain shift
label shift
long-tailed distribution
decision boundary
Innovation

Methods, ideas, or system contributions that make the work stand out.

Negative-Dominant Contrastive Learning
Imbalanced Domain Generalization
Decision Boundary Steering
Posterior Consistency
Long-Tailed Distribution