🤖 AI Summary
Multimodal representation learning is often limited by insufficient modeling of inter-modal collaborative interactions. To address this, we propose InfMasking, a novel representation learning framework for collaborative information enhancement. InfMasking systematically generates a vast number of partial-modality combinations by stochastically masking features from the majority of modalities—retaining only a random subset—thereby explicitly eliciting diverse collaborative interaction patterns. We further introduce a differentiable variational lower-bound loss that approximates and maximizes mutual information between masked and unmasked fused representations, enforcing collaborative alignment. Integrating stochastic feature masking, contrastive learning, and mutual information optimization, InfMasking achieves state-of-the-art performance across seven mainstream multimodal benchmarks. Experimental results demonstrate substantial improvements in modeling the intrinsic collaborative semantics among modalities, validating its effectiveness in capturing modality synergy.
📝 Abstract
In multimodal representation learning, synergistic interactions between modalities not only provide complementary information but also create unique outcomes through specific interaction patterns that no single modality could achieve alone. Existing methods may struggle to effectively capture the full spectrum of synergistic information, leading to suboptimal performance in tasks where such interactions are critical. This is particularly problematic because synergistic information constitutes the fundamental value proposition of multimodal representation. To address this challenge, we introduce InfMasking, a contrastive synergistic information extraction method designed to enhance synergistic information through an **Inf**inite **Masking** strategy. InfMasking stochastically occludes most features from each modality during fusion, preserving only partial information to create representations with varied synergistic patterns. Unmasked fused representations are then aligned with masked ones through mutual information maximization to encode comprehensive synergistic information. This infinite masking strategy enables capturing richer interactions by exposing the model to diverse partial modality combinations during training. As computing mutual information estimates with infinite masking is computationally prohibitive, we derive an InfMasking loss to approximate this calculation. Through controlled experiments, we demonstrate that InfMasking effectively enhances synergistic information between modalities. In evaluations on large-scale real-world datasets, InfMasking achieves state-of-the-art performance across seven benchmarks. Code is released at https://github.com/brightest66/InfMasking.
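The core mechanism described above — stochastically masking most features of each modality before fusion, then maximizing mutual information between masked and unmasked fused views — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (see the linked repository for that): the fusion by concatenation, the `keep_ratio` parameter, and the single-sample InfoNCE estimator below are simplifying assumptions standing in for the learned fusion module and the derived InfMasking loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_mask(x, keep_ratio=0.25):
    """Occlude most features, keeping a random subset (sketch of the masking step)."""
    keep = rng.random(x.shape) < keep_ratio
    return x * keep

def fuse(feats):
    """Toy fusion: concatenate modality features (the paper uses a learned fusion module)."""
    return np.concatenate(feats, axis=-1)

def info_nce(z_masked, z_full, temperature=0.1):
    """InfoNCE lower bound on MI between masked and unmasked fused views."""
    a = z_masked / np.linalg.norm(z_masked, axis=1, keepdims=True)
    b = z_full / np.linalg.norm(z_full, axis=1, keepdims=True)
    logits = a @ b.T / temperature                      # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))                             # positives on the diagonal
    return -log_probs[idx, idx].mean()

# Two toy modalities (e.g. image and text features), batch of 8.
x_img = rng.standard_normal((8, 16))
x_txt = rng.standard_normal((8, 16))

z_full = fuse([x_img, x_txt])
z_masked = fuse([stochastic_mask(x_img), stochastic_mask(x_txt)])

loss = info_nce(z_masked, z_full)
print(float(loss))
```

In the actual method this masking is repeated over many random subsets per sample (the "infinite" masking), and the InfMasking loss approximates the otherwise intractable expectation over all such masked views; one draw per step, as above, is only the degenerate case.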