🤖 AI Summary
Multimodal neural networks often suffer from modality overfitting, leading to imbalanced learning dynamics and hindering the full exploitation of cross-modal synergies. To address this, the authors propose the Modality-Informed Learning ratE Scheduler (MILES), which models inter-modality differences in conditional utilization rates as a principled basis for learning rate adaptation. MILES dynamically adjusts the learning rate during training to harmonize the speed of learning across modalities and mitigate single-modality dominance. It is architecture-agnostic, compatible with joint fusion frameworks, requires no structural modifications to the model, and incurs zero inference overhead. Evaluated on four mainstream multimodal tasks, MILES consistently outperforms seven state-of-the-art baselines. Moreover, it strengthens the unimodal encoders, empirically supporting the view that balanced modality-specific learning is critical for high-quality joint representation learning.
📝 Abstract
The aim of multimodal neural networks is to combine diverse data sources, referred to as modalities, to achieve enhanced performance compared to relying on a single modality. However, training multimodal networks is typically hindered by modality overfitting, where the network relies excessively on one of the available modalities. This often yields sub-optimal performance, limiting the potential of multimodal learning and resulting in marginal improvements relative to unimodal models. In this work, we present the Modality-Informed Learning ratE Scheduler (MILES) for training multimodal joint fusion models in a balanced manner. MILES leverages differences in modality-wise conditional utilization rates during training to effectively balance multimodal learning. The learning rate is dynamically adjusted during training to balance the speed at which the multimodal model learns from each modality, aiming for enhanced performance in both multimodal and unimodal predictions. We extensively evaluate MILES on four multimodal joint fusion tasks and compare its performance to seven state-of-the-art baselines. Our results show that MILES outperforms all baselines across all tasks and fusion methods considered in our study, effectively balancing modality usage during training. This results in improved multimodal performance and stronger modality encoders, which can be leveraged when dealing with unimodal samples or absent modalities. Overall, our work highlights the impact of balancing multimodal learning on improving model performance.
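To make the core idea concrete, here is a minimal sketch of utilization-driven, per-modality learning-rate balancing in the spirit the abstract describes. The utilization scores, the inverse-ratio scaling rule, and all names (`balanced_lrs`, `strength`) are illustrative assumptions for this sketch, not the actual MILES algorithm from the paper.

```python
def balanced_lrs(base_lr, utilization, strength=1.0):
    """Scale each modality's learning rate inversely with its relative
    utilization, so a dominant modality is slowed down and an under-used
    one is sped up. `utilization` maps modality name -> a nonnegative
    utilization score (hypothetical; the paper defines its own
    conditional utilization rate)."""
    mean_util = sum(utilization.values()) / len(utilization)
    lrs = {}
    for modality, u in utilization.items():
        # ratio > 1 means this modality dominates -> shrink its LR
        ratio = u / mean_util
        lrs[modality] = base_lr / (ratio ** strength)
    return lrs

# Hypothetical example: the audio encoder dominates training,
# so it receives a smaller learning rate than the video encoder.
lrs = balanced_lrs(1e-3, {"audio": 0.8, "video": 0.4})
```

In a typical PyTorch setup, the returned per-modality rates could be applied by putting each encoder's parameters in a separate optimizer parameter group and updating each group's `lr` between epochs.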