🤖 AI Summary
Multimodal joint training often suffers from imbalanced information fusion due to modality competition. To address this, we propose a two-stage framework whose first stage is a pretraining phase that actively shapes the modalities' initial states to mitigate optimization imbalance among them. Our core innovations are a formal definition of Effective Competitive Strength (ECS) and the introduction of FastPID, a differentiable, efficient, fine-grained metric grounded in mutual information and partial information decomposition that quantifies modality uniqueness, redundancy, and synergy. We further design an asynchronous controller to dynamically balance these three components. The method requires no modification to backbone architectures and is broadly compatible with existing models. Evaluated on multiple multimodal benchmarks, it achieves state-of-the-art performance while accelerating convergence and improving generalization. This work establishes a novel paradigm for tackling modality competition by explicitly shaping the modalities' initial states, shifting the focus from post-hoc fusion refinement to principled initialization.
📝 Abstract
Multi-modal fusion often suffers from modality competition during joint training, where one modality dominates the learning process and leaves the others under-optimized. Most existing methods address this issue during the joint learning stage, overlooking the critical impact of the model's initial state. In this study, we introduce a two-stage training framework that shapes the initial states through unimodal training before joint training begins. First, we propose the concept of Effective Competitive Strength (ECS) to quantify a modality's competitive strength. Our theoretical analysis further reveals that properly shaping the initial ECS via unimodal training achieves a provably tighter error bound. However, ECS is computationally intractable in deep neural networks. To bridge this gap, we develop a framework comprising two core components: a fine-grained, computable diagnostic metric and an asynchronous training controller. For the metric, we first prove that mutual information (MI) is a principled proxy for ECS. Because MI is induced by per-modality marginals and thus treats each modality in isolation, we further propose FastPID, a computationally efficient and differentiable solver for partial information decomposition, which decomposes the joint distribution's information into fine-grained measurements: modality-specific uniqueness, redundancy, and synergy. Guided by these measurements, our asynchronous controller dynamically balances modalities by monitoring uniqueness and locates the ideal initial state at which to start joint training by tracking peak synergy. Experiments on diverse benchmarks demonstrate that our method achieves state-of-the-art performance. Our work establishes that shaping the pre-fusion models' initial states is a powerful strategy that eases competition before it starts, reliably unlocking synergistic multi-modal fusion.
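Why decomposing beyond plain MI matters can be seen on a toy distribution. The sketch below is not the paper's FastPID solver; it only computes ordinary mutual information on a two-"modality" XOR task, where each input alone carries zero information about the label but the pair determines it exactly, i.e. the information is purely synergistic and invisible to per-modality MI. The helper names (`mi`, `marginal`, `entropy`) are illustrative, not from the paper.

```python
import math
from collections import defaultdict

def entropy(p):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalise a joint dict {tuple: prob} onto the given index positions."""
    out = defaultdict(float)
    for outcome, q in joint.items():
        out[tuple(outcome[i] for i in keep)] += q
    return dict(out)

def mi(joint, xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over index groups xs and ys."""
    return (entropy(marginal(joint, xs))
            + entropy(marginal(joint, ys))
            - entropy(marginal(joint, xs + ys)))

# Toy XOR distribution over (x1, x2, y): each "modality" alone is useless,
# but together they fully determine the label -- pure synergy.
xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}

i1  = mi(xor, [0], [2])      # I(X1; Y)      -> 0.0 bits
i2  = mi(xor, [1], [2])      # I(X2; Y)      -> 0.0 bits
i12 = mi(xor, [0, 1], [2])   # I(X1,X2; Y)   -> 1.0 bit
print(i1, i2, i12)
```

Here the unimodal MIs are both zero while the joint MI is one bit, so any diagnostic built from per-modality marginals alone would call both modalities useless; a PID-style decomposition instead attributes the full bit to synergy, which is exactly the quantity the asynchronous controller tracks to pick the starting point for joint training.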