Bridging Domain Generalization to Multimodal Domain Generalization via Unified Representations

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing single-modal domain generalization (DG) methods fail in multimodal domain generalization (MMDG) due to cross-modal inconsistency among heterogeneous modalities. Method: We propose the first unified representation and supervised disentanglement framework for MMDG, which maps multimodal inputs into a shared feature space and jointly enforces modality-invariance constraints and task-relevant supervision to disentangle domain-specific and semantically shared components. Contribution/Results: Our approach systematically extends single-modal DG paradigms to multimodal settings for the first time. Evaluated on EPIC-Kitchens and Human-Animal-Cartoon benchmarks, it achieves significant improvements in generalization to unseen target domains, setting new state-of-the-art performance. These results empirically validate its effectiveness in modeling cross-modal consistency and enhancing robustness under distribution shifts.

📝 Abstract
Domain Generalization (DG) aims to enhance model robustness in unseen or distributionally shifted target domains by training exclusively on source domains. Although existing DG techniques, such as data manipulation, learning strategies, and representation learning, have shown significant progress, they predominantly address single-modal data. With the emergence of numerous multi-modal datasets and increasing demand for multi-modal tasks, a key challenge in Multi-modal Domain Generalization (MMDG) has emerged: enabling models trained on multi-modal sources to generalize to unseen target distributions within the same modality set. Due to the inherent differences between modalities, directly transferring methods from single-modal DG to MMDG typically yields sub-optimal results. Because target domains are unseen during training, these methods generalize in effectively arbitrary directions and fail to account for inter-modal consistency. Applying them independently to each modality before combining the results can push different modalities toward divergent generalization directions, degrading overall generalization. To address these challenges, we propose a novel approach that leverages Unified Representations to map different paired modalities together, effectively adapting DG methods to MMDG by enabling synchronized multi-modal improvements within the unified space. Additionally, we introduce a supervised disentanglement framework that separates modal-general and modal-specific information, further enhancing the alignment of unified representations. Extensive experiments on benchmark datasets, including EPIC-Kitchens and Human-Animal-Cartoon, demonstrate the effectiveness and superiority of our method in enhancing multi-modal domain generalization.
Problem

Research questions and friction points this paper is trying to address.

Extending single-modal domain generalization to multi-modal scenarios
Addressing modality differences and inter-modal consistency in MMDG
Enhancing generalization via unified representations and disentanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Representations for multi-modal mapping
Supervised disentanglement of modal information
Synchronized improvements in unified space
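The two ideas above can be sketched with a toy example. The snippet below is a minimal numpy illustration, not the paper's actual architecture: the per-modality projection matrices, the decorrelation penalty, and the loss weighting are all assumptions made for clarity. It shows (1) an alignment loss that pulls the modal-general features of paired modalities together in a shared space, and (2) a disentanglement penalty that keeps modal-general and modal-specific parts statistically separated.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_general, w_specific):
    """Split one modality's features into modal-general and modal-specific parts
    via two hypothetical linear projections into a shared 8-d space."""
    return x @ w_general, x @ w_specific

# Toy paired features for two modalities (e.g., video and audio), batch of 4.
video = rng.normal(size=(4, 16))
audio = rng.normal(size=(4, 16))

# Hypothetical per-modality projection matrices (learned in a real system).
wv_g, wv_s = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
wa_g, wa_s = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))

v_gen, v_spec = encode(video, wv_g, wv_s)
a_gen, a_spec = encode(audio, wa_g, wa_s)

# Alignment loss: modal-general features of paired samples should coincide
# in the unified space, so DG methods applied there act on all modalities at once.
align_loss = np.mean((v_gen - a_gen) ** 2)

def decorrelation(a, b):
    """Squared Frobenius norm of the cross-covariance between two feature sets;
    a stand-in for the paper's supervised disentanglement objective."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    cov = a.T @ b / len(a)
    return np.sum(cov ** 2)

# Disentanglement penalty: modal-general and modal-specific parts of each
# modality should carry non-overlapping information.
disentangle_loss = decorrelation(v_gen, v_spec) + decorrelation(a_gen, a_spec)

total = align_loss + 0.1 * disentangle_loss  # hypothetical weighting
print(total)
```

In a full system these projections would be deep encoders trained jointly with a task loss (e.g., action classification on EPIC-Kitchens), and the alignment term is what lets a single-modal DG technique, applied once in the unified space, improve all modalities in sync.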