🤖 AI Summary
Existing multimodal retrieval approaches suffer from insufficient fine-grained cross-modal interaction, in particular inadequate fusion of visual and textual cues, because they rely on late-fusion dual-tower architectures. Method: This paper proposes a single-tower, end-to-end retrieval framework based on bottom-up joint encoding, enabling deep visual–textual integration early in feature extraction. It adapts multimodal large language models (MLLMs) as retrievers via a two-stage training strategy (post-training adaptation followed by instruction tuning) together with a unified cross-modal encoder, enhancing contextual understanding and semantic alignment. Contribution/Results: Extensive experiments demonstrate that the framework consistently outperforms strong baselines across diverse multimodal retrieval tasks, with gains especially pronounced in scenarios demanding tight modality fusion. This work provides the first empirical validation of the effectiveness and necessity of early joint encoding for modeling complex, compositional queries in multimodal retrieval.
📝 Abstract
Information retrieval is indispensable for today's Internet applications, yet traditional semantic matching techniques often fall short in capturing the fine-grained cross-modal interactions required for complex queries. Although late-fusion two-tower architectures attempt to bridge this gap by independently encoding visual and textual data before merging them at a high level, they frequently overlook the subtle interplay essential for comprehensive understanding. In this work, we rigorously assess these limitations and introduce a unified retrieval framework that fuses visual and textual cues from the ground up, enabling early cross-modal interactions that enhance contextual interpretation. Through a two-stage training process, comprising post-training adaptation followed by instruction tuning, we adapt MLLMs as retrievers using a simple one-tower architecture. Our approach outperforms conventional methods across diverse retrieval scenarios, particularly when processing complex multimodal inputs. Notably, the joint fusion encoder yields greater improvements on tasks that require modality fusion than on those that do not, underscoring the transformative potential of early integration strategies and pointing toward a promising direction for contextually aware and effective information retrieval.
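The architectural contrast at the heart of the abstract can be sketched schematically. The toy code below is not the paper's implementation: it uses stand-in mean-pooled token encoders (hypothetical names throughout) purely to show *where* cross-modal interaction happens — at the final dot product in a two-tower retriever versus inside a single shared encoder in a one-tower retriever.

```python
import random

D = 8  # toy embedding width


def token_vec(token, seed):
    # Deterministic pseudo-random embedding for one token (illustrative only).
    r = random.Random(f"{token}|{seed}")
    return [r.uniform(-1, 1) for _ in range(D)]


def encode(tokens, seed):
    # Stand-in encoder: mean-pools per-token vectors. A real system would use
    # a transformer; this only marks where fusion can (or cannot) occur.
    vecs = [token_vec(t, seed) for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def two_tower_score(query_tokens, doc_tokens):
    # Late fusion: each side is encoded independently; the only
    # cross-modal interaction is this final dot product.
    return dot(encode(query_tokens, seed=1), encode(doc_tokens, seed=2))


def one_tower_score(query_tokens, doc_tokens):
    # Early joint encoding: both token streams pass through one encoder
    # together, so interaction is possible from the first layer onward.
    joint = encode(query_tokens + ["[SEP]"] + doc_tokens, seed=3)
    return dot(joint, [1.0] * D)  # toy scoring head
```

In the two-tower case the document embedding never sees the query, which is why fine-grained compositional cues get lost; the one-tower variant trades precomputable document embeddings for that early interaction.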