Robust Domain Generalization under Divergent Marginal and Conditional Distributions

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of compound distribution shift in domain generalization, where both the marginal label distribution \(P(Y)\) and the conditional feature distribution \(P(X|Y)\) may vary simultaneously—a scenario that violates the common assumption of stable label marginals. To tackle this, the authors propose the first framework that jointly models both types of shifts by explicitly decomposing the joint distribution \(P(X,Y)\). They derive a novel risk upper bound that incorporates discrepancies in both marginal and conditional distributions and develop a meta-learning-based optimization strategy to minimize this bound over observed domains. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on standard domain generalization benchmarks as well as multi-domain long-tailed recognition tasks, confirming its effectiveness and robustness under compound distribution shifts.
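For orientation, the decomposition the summary refers to is the standard chain rule for joint distributions, and a bound of the described kind typically charges the two shift sources separately. The LaTeX sketch below shows only this generic shape; the divergence \(d\), the class-weighted expectation, and the residual term \(\lambda\) are placeholders, not the bound actually derived in the paper.

```latex
% Standard chain-rule factorization of the joint distribution:
\[
P(X, Y) = P(Y)\, P(X \mid Y)
\]
% Schematic shape of a target-domain risk bound that charges the two
% shift sources separately; d(.,.) and \lambda are placeholders only.
\[
\mathcal{R}_{T}(h) \le \mathcal{R}_{S}(h)
  + \underbrace{d\bigl(P_{T}(Y),\, P_{S}(Y)\bigr)}_{\text{marginal label shift}}
  + \underbrace{\mathbb{E}_{Y}\Bigl[\, d\bigl(P_{T}(X \mid Y),\, P_{S}(X \mid Y)\bigr) \Bigr]}_{\text{conditional shift}}
  + \lambda
\]
```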

📝 Abstract
Domain generalization (DG) aims to learn predictive models that can generalize to unseen domains. Most existing DG approaches focus on learning domain-invariant representations under the assumption of conditional distribution shift (i.e., primarily addressing changes in $P(X\mid Y)$ while assuming $P(Y)$ remains stable). However, real-world scenarios with multiple domains often involve compound distribution shifts where both the marginal label distribution $P(Y)$ and the conditional distribution $P(X\mid Y)$ vary simultaneously. To address this, we propose a unified framework for robust domain generalization under divergent marginal and conditional distributions. We derive a novel risk bound for unseen domains by explicitly decomposing the joint distribution into marginal and conditional components and characterizing risk gaps arising from both sources of divergence. To operationalize this bound, we design a meta-learning procedure that minimizes and validates the proposed risk bound across seen domains, ensuring strong generalization to unseen ones. Empirical evaluations demonstrate that our method achieves state-of-the-art performance not only on conventional DG benchmarks but also in challenging multi-domain long-tailed recognition settings where both marginal and conditional shifts are pronounced.
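The abstract does not spell out the meta-learning procedure, so the following PyTorch sketch is a hypothetical reconstruction of one plausible episodic loop: each step holds out one seen domain as a surrogate unseen domain, reweights the cross-entropy toward the held-out label marginal (the \(P(Y)\) term), and aligns class-conditional feature centroids with the held-out domain (a crude stand-in for the \(P(X \mid Y)\) term). The `meta_episode` function, both divergence choices, and the synthetic demo are all illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a leave-one-domain-out meta-learning loop that
# minimizes a risk bound with separate marginal and conditional terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, FEAT_DIM, IN_DIM = 5, 32, 64

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IN_DIM, FEAT_DIM), nn.ReLU())
        self.head = nn.Linear(FEAT_DIM, NUM_CLASSES)
    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

def label_marginal(y):
    """Empirical P(Y) of a batch as a probability vector."""
    counts = torch.bincount(y, minlength=NUM_CLASSES).float()
    return counts / counts.sum()

def conditional_gap(z_a, y_a, z_b, y_b):
    """Mean distance between per-class feature centroids: a crude
    stand-in for a class-conditional divergence d(P(X|Y), P'(X|Y))."""
    gaps = []
    for c in range(NUM_CLASSES):
        za, zb = z_a[y_a == c], z_b[y_b == c]
        if len(za) and len(zb):
            gaps.append((za.mean(0) - zb.mean(0)).norm())
    return torch.stack(gaps).mean() if gaps else z_a.new_zeros(())

def meta_episode(model, opt, domains, held_out, lam=0.1):
    """One episode: train every other seen domain as if it had to serve
    the held-out one -- cross-entropy importance-weighted toward the
    held-out label marginal (marginal-shift term), plus class-centroid
    alignment with the held-out features (conditional-shift term)."""
    x_ho, y_ho = domains[held_out]
    with torch.no_grad():
        z_ho, _ = model(x_ho)
    p_ho = label_marginal(y_ho)
    loss = x_ho.new_zeros(())
    for i, (x, y) in enumerate(domains):
        if i == held_out:
            continue
        z, logits = model(x)
        # Label-shift importance weights: P_held_out(Y=y) / P_i(Y=y).
        w = (p_ho / label_marginal(y).clamp_min(1e-8))[y]
        loss = loss + (w * F.cross_entropy(logits, y, reduction="none")).mean()
        loss = loss + lam * conditional_gap(z, y, z_ho, y_ho)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

# Synthetic demo: three "domains" with different label skews (marginal
# shift) and different class-mean geometries (conditional shift).
torch.manual_seed(0)
domains = []
for skew in (0.5, 1.0, 2.0):
    y = torch.multinomial(torch.arange(1, NUM_CLASSES + 1).float() ** skew,
                          128, replacement=True)
    x = torch.randn(128, IN_DIM) \
        + F.one_hot(y, NUM_CLASSES).float() @ torch.randn(NUM_CLASSES, IN_DIM)
    domains.append((x, y))

model = Net()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):
    ho = step % len(domains)
    print(f"step {step}, held-out domain {ho}, "
          f"loss {meta_episode(model, opt, domains, ho):.3f}")
```

Cycling the held-out domain every episode keeps the same minimize-and-validate structure the abstract describes; the centroid distance could equally be swapped for a kernel-based or adversarial class-conditional divergence without changing the loop.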
Problem

Research questions and friction points this paper is trying to address.

Domain Generalization
Marginal Distribution Shift
Conditional Distribution Shift
Distribution Divergence
Robust Generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain Generalization
Distribution Shift
Marginal Distribution
Conditional Distribution
Meta-learning
Jewon Yeom
Graduate School of Data Science, Seoul National University
Kyubyung Chae
Graduate School of Data Science, Seoul National University, South Korea
Hyunggyu Lim
Graduate School of Data Science, Seoul National University, South Korea
Yoonna Oh
SK hynix, South Korea
Dongyoon Yang
SK hynix, South Korea
Taesup Kim
Assistant Professor, Seoul National University