🤖 AI Summary
This work addresses a core challenge in multimodal domain generalization (MMDG): disparate optimization speeds across modalities lead to imbalanced gradient contributions, causing certain modalities to dominate training and degrading generalization to unseen domains. To mitigate this, the authors propose Gradient Modulation Projection (GMP), a novel approach that decouples the gradients of the classification and domain-invariance objectives, dynamically modulates per-modality gradients using semantic and domain confidence estimates, and incorporates an adaptive gradient projection mechanism to alleviate inter-task conflicts. Departing from conventional strategies that rely solely on source-domain performance for balancing, GMP integrates joint confidence guidance with dynamic gradient coordination into the multimodal optimization framework. Experiments demonstrate that GMP consistently enhances generalization across multiple benchmarks and can be flexibly integrated into existing MMDG methods.
📝 Abstract
Multimodal Domain Generalization (MMDG) leverages the complementary strengths of multiple modalities to enhance model generalization to unseen domains. A central challenge in multimodal learning is optimization imbalance, where modalities converge at different speeds during training. This imbalance leads to unequal gradient contributions, allowing some modalities to dominate the learning process while others lag behind. Existing balancing strategies typically regulate each modality's gradient contribution based on its classification performance on the source domain to alleviate this issue. However, relying solely on source-domain accuracy neglects a key insight in MMDG: modalities that excel on the source domain may generalize poorly to unseen domains, limiting cross-domain gains. To overcome this limitation, we propose Gradient Modulation Projection (GMP), a unified strategy that promotes balanced optimization in MMDG. GMP first decouples gradients associated with the classification and domain-invariance objectives. It then modulates each modality's gradient based on semantic and domain confidence. Moreover, GMP dynamically adjusts gradient projections by tracking the relative strength of each task, mitigating conflicts between classification and domain-invariant learning within modality-specific encoders. Extensive experiments demonstrate that GMP achieves state-of-the-art performance and integrates flexibly with diverse MMDG methods, significantly improving generalization across multiple benchmarks.
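To make the projection step concrete, here is a minimal NumPy sketch of the generic conflict-aware gradient projection that GMP's adaptive mechanism builds on: when the classification gradient and the domain-invariance gradient point in conflicting directions (negative inner product), the conflicting component is projected away, scaled by a task-strength weight. The function name, the `strength` parameter, and the flat-vector gradients are illustrative assumptions, not the paper's exact formulation (which also tracks relative task strength over training and applies per-modality confidence modulation).

```python
import numpy as np

def project_if_conflicting(g_cls, g_dom, strength=1.0):
    """Hypothetical sketch of conflict-aware gradient projection.

    If g_cls (classification gradient) conflicts with g_dom
    (domain-invariance gradient), i.e. their dot product is negative,
    remove g_cls's component along g_dom, scaled by `strength`.
    With strength=1.0 this is a full orthogonal projection; the actual
    GMP method adapts this scaling from the tasks' relative strength.
    """
    dot = np.dot(g_cls, g_dom)
    if dot < 0:  # gradients conflict: following one hurts the other
        g_cls = g_cls - strength * dot / (np.dot(g_dom, g_dom) + 1e-12) * g_dom
    return g_cls

# Usage: a conflicting pair is deconflicted...
g = project_if_conflicting(np.array([1.0, -1.0]), np.array([0.0, 1.0]))
# ...while a non-conflicting pair passes through unchanged.
h = project_if_conflicting(np.array([1.0, 1.0]), np.array([0.0, 1.0]))
```

With full projection (`strength=1.0`), the returned gradient is orthogonal to `g_dom`, so a subsequent update no longer works against the domain-invariance objective.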