Modality Equilibrium Matters: Minor-Modality-Aware Adaptive Alternating for Cross-Modal Memory Enhancement

📅 2025-05-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address learning bias caused by dominant modalities suppressing weaker ones, and degraded robustness under incomplete modality conditions in multimodal fusion, this paper proposes a Shapley-value-guided adaptive alternating training framework. Methodologically, it introduces: (1) a novel equilibrium deviation metric (EDM) that quantitatively measures contribution imbalance across modalities; (2) a weak-modality-prioritized scheduling mechanism grounded in Shapley value estimation, enabling dynamic and fair modality participation; and (3) a cross-modal memory module with inheritance and mapping capabilities, supporting both feature-level and sample-level alignment. The framework is compatible with dual-path encoders, covering both conventional (CNN) and LLM-based backbones. Evaluated on four benchmark datasets, it achieves state-of-the-art performance and significantly enhances generalization under modality-missing scenarios. Empirical analysis using EDM confirms a strong positive correlation between modality equilibrium and model accuracy.

๐Ÿ“ Abstract
Multimodal fusion is susceptible to modality imbalance, where dominant modalities overshadow weak ones, easily leading to biased learning and suboptimal fusion, especially under incomplete modality conditions. To address this problem, we propose a Shapley-guided alternating training framework that adaptively prioritizes minor modalities to balance and thus enhance the fusion. Our method leverages Shapley value-based scheduling to adaptively improve the training sequence, ensuring that under-optimized modalities receive sufficient learning. Additionally, we introduce a memory module to refine and inherit modality-specific representations, with a cross-modal mapping mechanism to align features at both the feature and sample levels. To further validate the adaptability of the proposed approach, the encoder module empirically adopts both conventional and LLM-based backbones. Building on a novel multimodal equilibrium metric, namely the equilibrium deviation metric (EDM), we evaluate performance in both balance and accuracy across four multimodal benchmark datasets, where our method achieves state-of-the-art (SOTA) results. Meanwhile, robustness analysis under missing modalities highlights its strong generalization capabilities. Accordingly, our findings reveal the untapped potential of alternating training, demonstrating that strategic modality prioritization fundamentally balances and promotes multimodal learning, offering a new paradigm for optimizing multimodal training dynamics.
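The Shapley value-based scheduling described in the abstract can be sketched with a standard Monte Carlo estimator of each modality's marginal contribution. The `eval_fn` hook, the modality names, and the lowest-contribution-first ordering below are illustrative assumptions, not the paper's exact procedure:

```python
import random

def shapley_contributions(modalities, eval_fn, num_samples=200, seed=0):
    """Monte Carlo estimate of each modality's Shapley value.

    eval_fn(subset) -> validation score of the fused model when only
    the modalities in `subset` are active (a hypothetical hook into
    the training/evaluation loop).
    """
    rng = random.Random(seed)
    phi = {m: 0.0 for m in modalities}
    for _ in range(num_samples):
        perm = list(modalities)
        rng.shuffle(perm)
        coalition = set()
        prev = eval_fn(frozenset(coalition))  # score with no modalities
        for m in perm:
            coalition.add(m)
            score = eval_fn(frozenset(coalition))
            phi[m] += score - prev  # marginal contribution of m
            prev = score
    return {m: v / num_samples for m, v in phi.items()}

def training_order(phi):
    # Weak-modality-prioritized schedule: train the modality with the
    # lowest estimated contribution first.
    return sorted(phi, key=phi.get)
```

For an additive toy game (each modality contributes a fixed score), the estimator recovers the contributions exactly and the schedule puts the weakest modality first.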
Problem

Research questions and friction points this paper is trying to address.

Addresses modality imbalance in multimodal fusion
Enhances minor modalities via Shapley-guided training
Improves cross-modal feature alignment and robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shapley-guided alternating training balances modalities
Cross-modal memory module refines representations
Novel equilibrium metric evaluates multimodal balance
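The exact formula of the equilibrium deviation metric (EDM) is not given on this page; as a minimal sketch, one plausible balance measure is the mean absolute deviation of normalized modality contributions from a uniform split (an assumption, not the paper's definition):

```python
def equilibrium_deviation(contributions):
    """Hypothetical EDM-style score over per-modality contributions.

    0.0 means perfectly balanced contributions; larger values mean
    greater imbalance across modalities.
    """
    vals = list(contributions.values())
    total = sum(vals)
    if total == 0:
        return 0.0
    n = len(vals)
    uniform = 1.0 / n
    # Mean absolute deviation of normalized shares from the uniform share.
    return sum(abs(v / total - uniform) for v in vals) / n
```

Under this sketch, equal contributions score 0.0 and a fully dominant modality scores the maximum for its modality count.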
Xiang Shi
School of Information Management, Wuhan University
Rui Zhang
School of Information Management, Wuhan University
Jiawei Liu
School of Information Management, Wuhan University
Yinpeng Liu
School of Information Management, Wuhan University
Qikai Cheng
Wuhan University
Wei Lu
School of Information Management, Wuhan University