PDMP: Rethinking Balanced Multimodal Learning via Performance-Dominant Modality Prioritization

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the under-optimization problem in multimodal learning, where improper fusion often leads to performance inferior to that of unimodal models. To tackle this issue, the authors propose a Performance-Dominant Modality Prioritization (PDMP) strategy that identifies the modality with superior unimodal performance as the dominant one and assigns it higher gradient weights during training, thereby enabling asymmetric optimization. Departing from conventional balanced learning paradigms, PDMP uniquely centers multimodal learning around the performance-dominant modality without relying on specific model architectures or fusion mechanisms. Extensive experiments demonstrate that PDMP consistently outperforms existing methods across multiple benchmark datasets, effectively mitigating the under-optimization challenge in multimodal settings.
📝 Abstract
Multimodal learning has attracted increasing attention due to its practicality. However, it often suffers from insufficient optimization, where the multimodal model underperforms even its unimodal counterparts. Existing methods attribute this problem to imbalanced learning between modalities and address it via gradient modulation. This paper argues that balanced learning is not the optimal setting for multimodal learning. On the contrary, imbalanced learning driven by the performance-dominant modality, i.e., the modality with superior unimodal performance, can contribute to better multimodal performance, and the under-optimization problem is caused by insufficient learning of this dominant modality. To this end, we propose the Performance-Dominant Modality Prioritization (PDMP) strategy to assist multimodal learning. Specifically, PDMP first identifies the performance-dominant modality via the performance ranking of independently trained unimodal models. It then introduces asymmetric coefficients to modulate the gradients of each modality, enabling the performance-dominant modality to dominate the optimization. Since PDMP relies only on the unimodal performance ranking, it is independent of the structure and fusion method of the multimodal model and has great potential for practical scenarios. Finally, extensive experiments on various datasets validate the superiority of PDMP.
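The two-step recipe in the abstract (rank modalities by unimodal performance, then apply asymmetric gradient coefficients) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the "dominant modality keeps coefficient 1.0 while weaker modalities are scaled by `alpha`" scheme, and the example accuracies are all assumptions for demonstration.

```python
import numpy as np

def pdmp_coefficients(unimodal_acc, alpha=0.5):
    """Derive asymmetric gradient coefficients from unimodal performance.

    unimodal_acc: dict mapping modality name -> validation accuracy of the
    independently trained unimodal model. The best-performing modality is
    treated as performance-dominant and keeps its full gradient
    (coefficient 1.0); the other modalities are down-weighted by `alpha`
    (0 < alpha < 1). The exact coefficient schedule here is a hypothetical
    choice, not necessarily the one used in the paper.
    """
    dominant = max(unimodal_acc, key=unimodal_acc.get)
    return {m: 1.0 if m == dominant else alpha for m in unimodal_acc}

def modulate_gradients(grads, coeffs):
    """Scale each modality's gradient by its PDMP coefficient
    before the optimizer step, so the dominant modality drives training."""
    return {m: coeffs[m] * g for m, g in grads.items()}

# Example: audio outperforms video as a unimodal model, so it dominates.
coeffs = pdmp_coefficients({"audio": 0.71, "video": 0.58}, alpha=0.4)
grads = {"audio": np.ones(3), "video": np.ones(3)}
scaled = modulate_gradients(grads, coeffs)
# audio gradients are unchanged; video gradients are scaled by 0.4
```

Because the coefficients depend only on the unimodal accuracy ranking, this step slots in front of any optimizer and fusion architecture, which is the model-agnostic property the abstract emphasizes.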
Problem

Research questions and friction points this paper is trying to address.

multimodal learning
under-optimization
performance-dominant modality
modality imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal learning
performance-dominant modality
gradient modulation
asymmetric optimization
modality prioritization
Shicai Wei
University of Electronic Science and Technology of China, UESTC
multimodal learning
Chunbo Luo
Associate Professor in Computer Science, University of Exeter
Signal Processing, Machine Learning
Qiang Zhu
Peng Cheng Laboratory, Shenzhen, China
Yang Luo
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, China