FixCLR: Negative-Class Contrastive Learning for Semi-Supervised Domain Generalization

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Semi-Supervised Domain Generalization (SSDG) aims to enhance model generalization to unseen domains using minimal target-domain labels, yet existing methods suffer from performance degradation due to extreme label scarcity. This paper proposes FixCLR, a novel framework addressing this challenge. Methodologically, it introduces a class-aware negative-sample contrastive learning strategy that explicitly enforces domain-invariant representation learning via negative-class repulsion loss—eliminating the need for conventional regularization terms—thus achieving lightweight and efficient cross-domain feature alignment. Furthermore, FixCLR seamlessly integrates pseudo-labeling with contrastive learning, supporting both pre-trained and non-pre-trained models. Extensive experiments on multiple standard SSDG benchmarks demonstrate that FixCLR consistently outperforms state-of-the-art methods, especially under extreme label scarcity (e.g., ≤3 labeled samples per class). Moreover, it serves as a plug-and-play enhancement module for mainstream semi-supervised and domain generalization approaches.

📝 Abstract
Semi-supervised domain generalization (SSDG) aims to solve the problem of generalizing to out-of-distribution data when only a few labels are available. Due to label scarcity, domain generalization methods often underperform when applied directly. Consequently, existing SSDG methods combine semi-supervised learning methods with various regularization terms. However, these methods do not explicitly regularize the model to learn domain-invariant representations across all domains, which is a key goal of domain generalization. To address this, we introduce FixCLR. Inspired by successes in self-supervised learning, we change two crucial components to adapt contrastive learning for explicit domain-invariance regularization: utilizing class information from pseudo-labels and using only a repelling term. FixCLR can also be added on top of most existing SSDG and semi-supervised methods for complementary performance improvements. Our research includes extensive experiments that have not been previously explored in SSDG studies, including benchmarking different improvements to semi-supervised methods, evaluating pretrained versus non-pretrained models, and testing on datasets with many domains. Overall, FixCLR proves to be an effective SSDG method, especially when combined with other semi-supervised methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses semi-supervised domain generalization with limited labels
Improves domain-invariant representation learning across domains
Enhances existing SSDG methods via contrastive learning adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pseudo-labels for class information
Applies repelling term for domain invariance
Enhances existing SSDG methods additively
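The two adaptations above (class information from pseudo-labels, and a repelling term only) can be sketched as a minimal loss function. This is a hypothetical illustration, not the paper's actual implementation: the function name, the `log(1 + exp(sim))` penalty, and the temperature value are all assumptions chosen for clarity.

```python
import numpy as np

def repel_only_contrastive_loss(features, pseudo_labels, temperature=0.5):
    """Hypothetical sketch: a repel-only, class-aware contrastive loss.

    Standard contrastive losses combine an attracting term (pull positives
    together) and a repelling term (push negatives apart). This sketch keeps
    only the repelling term, applied between samples whose pseudo-labels
    differ, illustrating the idea of pushing apart different (pseudo-)classes
    regardless of domain.
    """
    feats = np.asarray(features, dtype=float)
    # L2-normalize so the dot product is cosine similarity
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature              # pairwise similarities
    labels = np.asarray(pseudo_labels)
    # Negative-pair mask: True where pseudo-labels differ
    neg_mask = labels[:, None] != labels[None, :]
    # Penalize high similarity between negative pairs; log1p(exp(x)) is a
    # smooth, always-positive surrogate that grows with similarity
    losses = np.log1p(np.exp(sim[neg_mask]))
    return float(losses.mean()) if losses.size else 0.0

# Toy usage: eight samples from two pseudo-classes in 2-D feature space
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 2))
labels = [0, 0, 0, 0, 1, 1, 1, 1]
loss = repel_only_contrastive_loss(feats, labels)
print(loss)
```

Because there is no attracting term, the loss never forces samples of the same pseudo-class (possibly from different domains) toward a single point; it only separates different pseudo-classes, which is one way to read the "repelling term only" design choice.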