🤖 AI Summary
Unsupervised Domain Generalization (UDG) is challenging because neither class nor domain labels are available, making it difficult to disentangle semantic content from domain-specific variations. Method: This paper formalizes UDG as learning a Minimal Sufficient Semantic Representation, grounding the objective in the information-theoretic principles of sufficiency and minimality and providing theoretical guarantees that optimizing them reduces out-of-distribution risk. The practical instantiation, Minimal-Sufficient UDG (MS-UDG), is a learnable model that jointly optimizes an InfoNCE-based contrastive objective for sufficiency, a semantic–variation disentanglement loss, and a reconstruction-based regularizer for minimality. Contribution/Results: Fully label-free, MS-UDG sets a new state-of-the-art on mainstream UDG benchmarks, consistently outperforming existing self-supervised and UDG methods across all settings.
📝 Abstract
The generalization ability of deep learning has been extensively studied in supervised settings, yet it remains less explored in unsupervised scenarios. Recently, the Unsupervised Domain Generalization (UDG) task has been proposed to enhance the generalization of models trained with prevalent unsupervised learning techniques, such as Self-Supervised Learning (SSL). UDG confronts the challenge of distinguishing semantics from variations without category labels. Although some recent methods have employed domain labels to tackle this issue, such domain labels are often unavailable in real-world contexts. In this paper, we address these limitations by formalizing UDG as the task of learning a Minimal Sufficient Semantic Representation: a representation that (i) preserves all semantic information shared across augmented views (sufficiency), and (ii) maximally removes information irrelevant to semantics (minimality). We theoretically ground these objectives from the perspective of information theory, demonstrating that optimizing representations to achieve sufficiency and minimality directly reduces out-of-distribution risk. Practically, we implement this optimization through Minimal-Sufficient UDG (MS-UDG), a learnable model that integrates (a) an InfoNCE-based objective to achieve sufficiency; and (b) two complementary components to promote minimality: a novel semantic-variation disentanglement loss and a reconstruction-based mechanism for capturing adequate variation. Empirically, MS-UDG sets a new state-of-the-art on popular unsupervised domain-generalization benchmarks, consistently outperforming existing SSL and UDG methods, without category or domain labels during representation learning.
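To make the composite objective concrete, here is a minimal PyTorch sketch of the three terms the abstract names: an InfoNCE loss between two augmented views (sufficiency), a disentanglement penalty between semantic and variation embeddings, and a reconstruction term. The exact loss forms and weights in the paper are not given here; the cross-correlation disentanglement penalty and the loss weights `lam`/`mu` below are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of an MS-UDG-style objective; the disentanglement
# term and weights are illustrative assumptions, not the paper's exact losses.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Sufficiency term: SimCLR-style InfoNCE between two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def ms_udg_loss(z_sem1, z_sem2, z_var, x, x_hat, lam=1.0, mu=1.0):
    """Composite loss: sufficiency + minimality (disentanglement + reconstruction).

    z_sem1, z_sem2: semantic embeddings of two augmented views of x.
    z_var: variation embedding; x_hat: reconstruction of x from (z_sem, z_var).
    """
    l_suf = info_nce(z_sem1, z_sem2)
    # Illustrative disentanglement: penalize cross-correlation between
    # (centered) semantic and variation embeddings.
    zs = z_sem1 - z_sem1.mean(dim=0)
    zv = z_var - z_var.mean(dim=0)
    l_dis = (zs.t() @ zv / zs.size(0)).pow(2).mean()
    # Reconstruction regularizer: variation must retain enough information
    # (together with semantics) to rebuild the input.
    l_rec = F.mse_loss(x_hat, x)
    return l_suf + lam * l_dis + mu * l_rec
```

The intuition matches the abstract's framing: InfoNCE pulls the semantic representation toward what is shared across views, while the disentanglement and reconstruction terms push view- and domain-specific information out of the semantic branch and into the variation branch.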