🤖 AI Summary
This work addresses the challenge in multimodal learning where imbalanced missing rates across modalities often cause information-rich modalities to dominate optimization, thereby diminishing the contribution of weaker modalities and disrupting both representation learning and gradient dynamics. To tackle this issue, the authors propose BALM, a model-agnostic, plug-and-play framework that, for the first time, approaches the problem from the perspective of the training process. BALM introduces a Feature Calibration Module (FCM) to establish a shared representational basis across diverse missing patterns, and a Gradient Rebalancing Module (GRM) to jointly modulate gradient magnitude and direction from both distributional and spatial perspectives. Extensive experiments on multiple multimodal emotion recognition benchmarks demonstrate that BALM significantly enhances model robustness and performance under various missingness and imbalance scenarios.
📝 Abstract
Learning from multiple modalities often suffers from imbalance, where information-rich modalities dominate optimization while weaker or partially missing modalities contribute less. This imbalance becomes severe in realistic settings with imbalanced missing rates (IMR), where each modality is absent with a different probability, distorting representation learning and gradient dynamics. We revisit this issue from a training-process perspective and propose BALM, a model-agnostic plug-in framework that achieves balanced multimodal learning under IMR. The framework comprises two complementary modules: the Feature Calibration Module (FCM), which recalibrates unimodal features using global context to establish a shared representation basis across heterogeneous missing patterns; and the Gradient Rebalancing Module (GRM), which balances learning dynamics across modalities by modulating gradient magnitudes and directions from both distributional and spatial perspectives. BALM can be seamlessly integrated into diverse backbones, including multimodal emotion recognition (MER) models, without altering their architectures. Experimental results across multiple MER benchmarks confirm that BALM consistently enhances robustness and improves performance under diverse missing and imbalance settings. Code is available at: https://github.com/np4s/BALM_CVPR2026.git
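To make the gradient-rebalancing idea concrete, the sketch below illustrates the general technique the abstract describes: equalizing per-modality gradient magnitudes and suppressing directional conflict against a shared reference direction. This is a hypothetical, simplified illustration, not the paper's actual GRM; the function name, the mean-norm target, and the projection rule are all assumptions for exposition.

```python
import numpy as np

def rebalance_gradients(grads):
    """Hypothetical sketch of per-modality gradient rebalancing.

    Magnitude: rescale each modality's gradient to the mean norm, so no
    single modality dominates the update. Direction: remove any component
    that opposes the mean (shared) gradient direction, reducing conflict.
    This is an illustration of the general idea only, not BALM's GRM.
    """
    norms = np.array([np.linalg.norm(g) for g in grads])
    target = norms.mean()
    # Magnitude balancing: every modality gradient gets the mean norm.
    scaled = [g * (target / (n + 1e-12)) for g, n in zip(grads, norms)]
    # Directional balancing: use the mean gradient as a reference.
    ref = np.mean(scaled, axis=0)
    ref_unit = ref / (np.linalg.norm(ref) + 1e-12)
    out = []
    for g in scaled:
        proj = np.dot(g, ref_unit)
        if proj < 0:
            # Drop the component pointing against the shared direction.
            g = g - proj * ref_unit
        out.append(g)
    return out
```

With two modalities whose raw gradient norms differ by 4x, the rebalanced gradients come out with equal norms, which is the behavior a magnitude-balancing scheme is meant to enforce.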