🤖 AI Summary
In multimodal learning, conventional joint loss functions often induce modality imbalance, wherein dominant modalities suppress weaker ones, degrading both unimodal feature extraction and cross-modal interaction. To address this, we propose a unidirectional dynamic interaction framework: (1) decoupling modality optimization via a staged training strategy—first achieving convergence on an anchor modality, then guiding other modalities; (2) replacing symmetric joint losses with an active, asymmetric unidirectional information guidance mechanism; and (3) integrating unsupervised losses with a dynamic interaction adjustment module. Our approach alleviates modality imbalance without increasing model parameters or inference overhead. Extensive experiments demonstrate consistent superiority over state-of-the-art methods across multiple benchmark tasks, validating its effectiveness in enhancing feature disentanglement, interaction quality, and overall performance.
📝 Abstract
Multimodal learning typically employs a multimodal joint loss to integrate different modalities and enhance model performance. However, this joint learning strategy can induce modality imbalance, where strong modalities overwhelm weaker ones, limiting the exploitation of both the individual information within each modality and the inter-modality interaction information. Existing strategies such as dynamic loss weighting, auxiliary objectives, and gradient modulation mitigate modality imbalance while still building on the joint loss. These methods remain fundamentally reactive: they detect and correct imbalance after it arises, leaving the competitive nature of the joint loss untouched. This limitation drives us to explore a new strategy for imbalanced multimodal learning that does not rely on the joint loss, enabling more effective interactions between modalities and better utilization of information from individual modalities and their interactions. In this paper, we introduce Unidirectional Dynamic Interaction (UDI), a novel strategy that abandons the conventional joint loss in favor of a proactive, sequential training scheme. UDI first trains the anchor modality to convergence, then uses its learned representations to guide the other modality via an unsupervised loss. Furthermore, dynamic adjustment of the modality interactions allows the model to adapt to the task at hand, ensuring that each modality contributes optimally. By decoupling modality optimization and enabling directed information flow, UDI prevents domination by any single modality and fosters effective cross-modal feature learning. Our experimental results demonstrate that UDI outperforms existing methods in handling modality imbalance, yielding consistent performance improvements on multimodal learning tasks.
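The staged scheme described in the abstract can be sketched in miniature. The example below is an illustrative toy, not the paper's implementation: the linear encoders, the MSE alignment loss, and the decaying interaction weight are all assumptions standing in for the paper's unsupervised loss and dynamic interaction adjustment module. Stage 1 trains the anchor modality's encoder to convergence on the supervised task; stage 2 freezes it and trains the second modality's encoder to match the anchor's representations, so information flows in one direction only.

```python
# Toy sketch of UDI-style staged training (assumed details, not the paper's code).
import random

random.seed(0)

# Paired toy data: modalities A (anchor) and B observe the same latent z
# through different linear distortions; y is the supervised target.
data = [(z * 2.0, z * -3.0 + 1.0, z) for z in [i / 10 for i in range(-10, 11)]]

# Stage 1: train the anchor encoder (w_a, b_a) on the supervised task
# until convergence, with no influence from modality B.
w_a, b_a, lr = 0.0, 0.0, 0.05
for _ in range(500):
    for x_a, _, y in data:
        err = (w_a * x_a + b_a) - y        # d(MSE)/d(pred), up to a factor of 2
        w_a -= lr * err * x_a
        b_a -= lr * err

# Stage 2: freeze the anchor; train modality B's encoder (w_b, b_b) with an
# unsupervised alignment (MSE) loss toward the anchor's representations.
# The decaying weight is an assumed stand-in for the dynamic adjustment module.
w_b, b_b = 0.0, 0.0
for step in range(500):
    align_w = 1.0 / (1.0 + step / 100)     # assumed interaction-weight schedule
    for x_a, x_b, _ in data:
        target = w_a * x_a + b_a           # frozen anchor representation
        err = align_w * ((w_b * x_b + b_b) - target)  # unidirectional: B follows A
        w_b -= lr * err * x_b
        b_b -= lr * err

# Report how closely modality B's representations track the anchor's.
mse = sum(((w_b * x_b + b_b) - (w_a * x_a + b_a)) ** 2
          for x_a, x_b, _ in data) / len(data)
print(f"alignment MSE: {mse:.6f}")
```

Because the anchor is frozen in stage 2, the guidance is asymmetric by construction: modality B cannot distort the anchor's features, which is the property the abstract contrasts with symmetric joint losses.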