Feature Modulation for Semi-Supervised Domain Generalization without Domain Labels

📅 2025-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses semi-supervised domain generalization (SSDG) in the absence of domain labels. To tackle this challenging setting, we propose a robust learning framework that operates without domain annotations. Our method introduces three key components: (1) a domain-agnostic feature modulation mechanism that constructs class prototypes invariant to domain shifts by leveraging similarity-weighted average representations; (2) a dynamic pseudo-label thresholding strategy, which adaptively lowers confidence thresholds via a loss-scaling function to suppress noise induced by domain shift; and (3) FixMatch-style consistency regularization to enforce prediction stability under perturbations. Evaluated on four standard domain generalization benchmarks, our approach significantly outperforms existing methods, achieving the first performance breakthrough for SSDG under fully domain-label-free conditions.
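The summary's first component, class prototypes built from similarity-weighted average representations, can be illustrated with a minimal numpy sketch. This is one plausible reading, not the authors' implementation: the function name, the cosine-similarity weighting, and the `temperature` parameter are all assumptions for illustration.

```python
import numpy as np

def similar_average_representations(features, labels, num_classes, temperature=0.1):
    """Hypothetical sketch of similarity-weighted class prototypes.

    Each prototype is a weighted mean of a class's feature vectors,
    where samples closer (by cosine similarity) to the plain class
    mean receive higher softmax weights, damping outlying,
    domain-specific features.
    """
    d = features.shape[1]
    prototypes = np.zeros((num_classes, d))
    for c in range(num_classes):
        feats = features[labels == c]
        if feats.size == 0:
            continue  # no samples of this class in the batch
        mean = feats.mean(axis=0)
        # cosine similarity of each sample to the plain class mean
        sims = feats @ mean / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(mean) + 1e-8
        )
        weights = np.exp(sims / temperature)
        weights /= weights.sum()
        prototypes[c] = (weights[:, None] * feats).sum(axis=0)
    return prototypes
```

Modulating features toward such prototypes would then pull same-class samples from different domains into one tight cluster.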

📝 Abstract
Semi-supervised domain generalization (SSDG) leverages a small fraction of labeled data alongside unlabeled data to enhance model generalization. Most existing SSDG methods rely on pseudo-labeling (PL) for unlabeled data, often assuming access to domain labels, a privilege not always available. However, domain shifts introduce domain noise, leading to inconsistent PLs that degrade model performance. Methods derived from FixMatch suffer particularly from lower PL accuracy, reducing the effectiveness of unlabeled data. To address this, we tackle the more challenging domain-label agnostic SSDG, where domain labels for unlabeled data are not available during training. First, we propose a feature modulation strategy that enhances class-discriminative features while suppressing domain-specific information. This modulation shifts features toward Similar Average Representations, a modified version of class prototypes that is robust across domains, encouraging the classifier to distinguish between closely related classes and the feature extractor to form tightly clustered, domain-invariant representations. Second, to mitigate domain noise and improve pseudo-label accuracy, we introduce a loss-scaling function that dynamically lowers the fixed confidence threshold for pseudo-labels, optimizing the use of unlabeled data. With these key innovations, our approach achieves significant improvements on four major domain generalization benchmarks, even without domain labels. We will make the code available.
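The loss-scaling idea in the abstract, softening a fixed confidence threshold rather than hard-masking low-confidence pseudo-labels, can be sketched as follows. This is a hypothetical rendering under assumed details: the power-law scaling form and the `gamma` parameter are not from the paper.

```python
import numpy as np

def scaled_pseudo_label_loss(probs, base_threshold=0.95, gamma=2.0):
    """Hypothetical loss-scaled pseudo-label cross-entropy.

    Instead of discarding samples whose max softmax confidence falls
    below the fixed FixMatch threshold, their loss is down-weighted by
    (confidence / threshold) ** gamma, so moderately confident samples
    still contribute to training.
    """
    conf = probs.max(axis=1)          # max softmax confidence per sample
    pseudo = probs.argmax(axis=1)     # hard pseudo-label per sample
    scale = np.where(conf >= base_threshold, 1.0,
                     (conf / base_threshold) ** gamma)
    ce = -np.log(probs[np.arange(len(probs)), pseudo] + 1e-8)
    return (scale * ce).mean()
```

Under this reading, the effective threshold is "lowered" because samples below 0.95 are no longer zeroed out; noise is still suppressed because their weight shrinks rapidly with confidence.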
Problem

Research questions and friction points this paper is trying to address.

Enhancing model generalization without domain labels
Reducing domain noise in pseudo-labeling methods
Improving pseudo-label accuracy with dynamic thresholds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature modulation suppresses domain-specific information
Dynamic loss-scaling improves pseudo-label accuracy
Similar Average Representations enhance domain robustness
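The FixMatch-style consistency regularization mentioned in the summary pairs a weakly and a strongly augmented view of each unlabeled sample. A minimal sketch of the standard FixMatch term (not this paper's exact variant) follows; it assumes the two probability matrices come from the two views of the same batch.

```python
import numpy as np

def fixmatch_consistency_loss(weak_probs, strong_probs, threshold=0.95):
    """Standard FixMatch-style consistency term (sketch).

    The weak view supplies a hard pseudo-label; the strong view is
    trained to match it via cross-entropy, masked so that only
    confident weak-view predictions contribute.
    """
    conf = weak_probs.max(axis=1)
    pseudo = weak_probs.argmax(axis=1)
    mask = (conf >= threshold).astype(float)
    ce = -np.log(strong_probs[np.arange(len(strong_probs)), pseudo] + 1e-8)
    return (mask * ce).mean()
```

The paper's contribution, per the abstract, is to replace the hard `threshold` mask above with its dynamic loss-scaling scheme.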
Venuri Amarasinghe, University of Moratuwa
Asini Jayakody, University of Moratuwa
Isun Randila, University of Moratuwa
Kalinga Bandara, University of Moratuwa
Chamuditha Jayanga Galappaththige, Queensland University of Technology
Ranga Rodrigo, Department of Electronic and Telecommunication Engineering, University of Moratuwa
Computer Vision