AI Summary
To address the learning bias caused by dominant modalities suppressing weaker ones, and the degraded robustness under incomplete-modality conditions in multimodal fusion, this paper proposes a Shapley-value-guided adaptive alternating training framework. Methodologically, it introduces: (1) a novel Equilibrium Deviation Metric (EDM) that quantitatively measures contribution imbalance across modalities; (2) a weak-modality-prioritized scheduling mechanism grounded in Shapley value estimation, enabling dynamic and fair modality participation; and (3) a cross-modal memory module with inheritance and mapping capabilities, supporting alignment at both the feature and sample levels. The framework is compatible with dual-path encoders (CNNs and LLMs). Evaluated on four benchmark datasets, it achieves state-of-the-art performance and significantly enhances generalization under modality-missing scenarios. Empirical analysis using EDM confirms a strong positive correlation between modality equilibrium and model accuracy.
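The Shapley value estimation underlying the scheduling mechanism can be illustrated with a small sketch. The summary does not specify the paper's exact estimator or value function, so the following assumes an exact (enumerative) computation over a three-modality set and a hypothetical value function `v` that maps a modality coalition to validation accuracy; the accuracy table below is toy data for illustration only.

```python
from itertools import permutations

def shapley_values(modalities, v):
    """Exact Shapley values for a small modality set.

    v: frozenset of modalities -> utility (e.g. validation accuracy of a
    model fused from exactly those modalities). Feasible here because the
    number of modalities is tiny; real systems would subsample permutations.
    """
    phi = {m: 0.0 for m in modalities}
    perms = list(permutations(modalities))
    for order in perms:
        coalition = frozenset()
        for m in order:
            # marginal contribution of m given the modalities added so far
            phi[m] += v(coalition | {m}) - v(coalition)
            coalition = coalition | {m}
    return {m: total / len(perms) for m, total in phi.items()}

# Toy coalition accuracies; "audio" is deliberately the weak modality.
acc = {
    frozenset(): 0.0,
    frozenset({"text"}): 0.70,
    frozenset({"image"}): 0.60,
    frozenset({"audio"}): 0.30,
    frozenset({"text", "image"}): 0.85,
    frozenset({"text", "audio"}): 0.74,
    frozenset({"image", "audio"}): 0.66,
    frozenset({"text", "image", "audio"}): 0.90,
}

phi = shapley_values(["text", "image", "audio"], acc.__getitem__)
# Weak-modality-prioritized scheduling: train the lowest-contribution
# modality first so it receives sufficient optimization.
weakest = min(phi, key=phi.get)
```

By efficiency of the Shapley value, the per-modality contributions sum to the full-coalition accuracy (0.90 here), which makes them a natural basis for the contribution-imbalance measurements described above.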
Abstract
Multimodal fusion is susceptible to modality imbalance, where dominant modalities overshadow weak ones, leading to biased learning and suboptimal fusion, especially under incomplete-modality conditions. To address this problem, we propose a Shapley-guided alternating training framework that adaptively prioritizes weak modalities to balance, and thus enhance, the fusion. Our method leverages Shapley value-based scheduling to adaptively optimize the training sequence, ensuring that under-optimized modalities receive sufficient learning. Additionally, we introduce a memory module that refines and inherits modality-specific representations, together with a cross-modal mapping mechanism that aligns features at both the feature and sample levels. To further validate the adaptability of the proposed approach, the encoder module empirically adopts both conventional and LLM-based backbones. By constructing a novel multimodal equilibrium measure, the equilibrium deviation metric (EDM), we evaluate performance in terms of both balance and accuracy across four multimodal benchmark datasets, where our method achieves state-of-the-art (SOTA) results. Meanwhile, robustness analysis under missing modalities highlights its strong generalization capability. Accordingly, our findings reveal the untapped potential of alternating training, demonstrating that strategic modality prioritization fundamentally balances and promotes multimodal learning, offering a new paradigm for optimizing multimodal training dynamics.
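The abstract does not reproduce the EDM formula, but a minimal sketch conveys the idea of an equilibrium deviation score. The instantiation below is an assumption, not the paper's definition: it normalizes per-modality contributions (e.g. the Shapley values) into a distribution and measures its root-mean-square deviation from the uniform distribution, so 0 means perfectly balanced and larger values mean stronger imbalance.

```python
import math

def equilibrium_deviation(contributions):
    """Hypothetical equilibrium-deviation score (the paper's exact EDM
    formula is not given here). Normalizes per-modality contributions and
    returns their RMS deviation from a uniform share; 0 = fully balanced.
    """
    total = sum(contributions)
    p = [c / total for c in contributions]  # contribution distribution
    u = 1.0 / len(p)                        # uniform (balanced) share
    return math.sqrt(sum((pi - u) ** 2 for pi in p) / len(p))

balanced = equilibrium_deviation([0.33, 0.34, 0.33])  # near zero
skewed = equilibrium_deviation([0.80, 0.15, 0.05])    # clearly larger
```

A score like this makes the reported balance-accuracy correlation measurable: tracking the deviation over training epochs would show whether the alternating schedule actually drives contributions toward equilibrium.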