🤖 AI Summary
Multimodal information retrieval (MIR) faces two core challenges: modality heterogeneity and the difficulty of cross-modal alignment, and existing methods lack systematic modeling of modality-specific characteristics. This paper proposes UNITE, a general framework emphasizing modality-specific data curation and modality-aware training configuration. It presents the first systematic analysis of how modality attributes affect downstream retrieval performance; introduces Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate cross-modal instance competition; and integrates modality-aware data cleaning, embedding alignment, and multi-stage weighted training. Evaluated on multiple mainstream MIR benchmarks, UNITE achieves state-of-the-art performance, significantly outperforming prior approaches. These results empirically validate that modality-customized representation learning is critical for robust cross-modal retrieval.
📝 Abstract
Multimodal information retrieval (MIR) faces inherent challenges due to the heterogeneity of data sources and the complexity of cross-modal alignment. While previous studies have identified modal gaps in feature spaces, a systematic approach to addressing these challenges remains unexplored. In this work, we introduce UNITE, a universal framework that tackles these challenges through two critical yet underexplored aspects: data curation and modality-aware training configurations. Our work provides the first comprehensive analysis of how modality-specific data properties influence downstream task performance across diverse scenarios. Moreover, we propose Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate competition among instances of different modalities. Our framework achieves state-of-the-art results on multiple multimodal retrieval benchmarks, outperforming existing methods by notable margins. Through extensive experiments, we demonstrate that strategic modality curation and tailored training protocols are pivotal for robust cross-modal representation learning. This work not only advances MIR performance but also provides a foundational blueprint for future research in multimodal systems. Our project is available at https://friedrichor.github.io/projects/UNITE.
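The abstract does not spell out how MAMCL works, but the stated goal of reducing competition among instances of different modalities suggests a masked variant of InfoNCE. The sketch below is an illustrative guess, not the paper's actual formulation: for each query, candidates whose modality differs from that of the query's positive target are masked out of the softmax denominator, so only same-modality candidates compete as negatives. All names (`mamcl_loss`, `cand_modality`, etc.) are hypothetical.

```python
import numpy as np

def mamcl_loss(query, candidates, target_idx, cand_modality, temperature=0.07):
    """Illustrative modality-masked InfoNCE (NOT the paper's exact loss).

    query:         (B, D) query embeddings
    candidates:    (N, D) candidate embeddings
    target_idx:    (B,)   index of each query's positive candidate
    cand_modality: (N,)   integer modality id of each candidate
    """
    # Cosine-similarity logits, scaled by temperature
    q = query / np.linalg.norm(query, axis=-1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=-1, keepdims=True)
    logits = q @ c.T / temperature                      # (B, N)

    # Modality of each query's positive target
    pos_mod = cand_modality[target_idx]                 # (B,)

    # Keep only candidates that share the positive's modality;
    # cross-modal competitors are removed from the denominator.
    keep = cand_modality[None, :] == pos_mod[:, None]   # (B, N)
    logits = np.where(keep, logits, -np.inf)

    # Numerically stable cross-entropy over the kept candidates
    m = logits.max(axis=1, keepdims=True)
    log_z = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    pos_logit = logits[np.arange(len(target_idx)), target_idx]
    return float(np.mean(log_z - pos_logit))
```

Because the positive always shares its own modality, its logit is never masked, so the loss stays finite while off-modality instances no longer pull probability mass away from the target.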