Enhancing Multimodal Retrieval via Complementary Information Extraction and Alignment

📅 2026-01-08
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal retrieval approaches primarily focus on semantic alignment between images and text, often overlooking the complementary information embedded within images. This work proposes CIEA, the first method to systematically model and leverage the complementary semantic discrepancies between modalities. By integrating a complementary information extractor with a unified latent space representation and employing a dual complementary contrastive loss, CIEA explicitly preserves inter-modal differences while maintaining semantic consistency. The end-to-end architecture significantly outperforms both sparse and dense retrieval baselines across multiple benchmarks. Ablation studies and case analyses further validate the effectiveness of the proposed approach in capturing and utilizing complementary semantics for improved retrieval performance.

📝 Abstract
Multimodal retrieval has emerged as a promising yet challenging research direction in recent years. Most existing studies in multimodal retrieval focus on capturing information in multimodal data that is similar to their paired texts, but often ignore the complementary information contained in multimodal data. In this study, we propose CIEA, a novel multimodal retrieval approach that employs Complementary Information Extraction and Alignment: it transforms both text and images in documents into a unified latent space and features a complementary information extractor designed to identify and preserve differences in the image representations. We optimize CIEA using two complementary contrastive losses to ensure semantic integrity and effectively capture the complementary information contained in images. Extensive experiments demonstrate the effectiveness of CIEA, which achieves significant improvements over both divide-and-conquer models and universal dense retrieval models. We provide an ablation study, further discussions, and case studies to highlight the advancements achieved by CIEA. To promote further research in the community, we have released the source code at https://github.com/zengdlong/CIEA.
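The dual complementary contrastive objective described in the abstract might be sketched as below. This is a minimal NumPy illustration, not the paper's actual formulation: the InfoNCE-style loss, the function names (`info_nce`, `dual_complementary_loss`), the separate "complementary" image features, and the weighting `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """InfoNCE-style contrastive loss: matched rows of a and b are positives,
    all other rows in the batch serve as in-batch negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                      # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                  # -log p(positive | row)

def dual_complementary_loss(text_emb, img_emb, img_comp, alpha=0.5):
    """Hypothetical dual loss: one term aligns image embeddings with their
    paired text (semantic consistency); a second term pulls the extracted
    complementary image features toward the text so inter-modal differences
    are preserved rather than discarded. alpha balances the two terms."""
    align = info_nce(text_emb, img_emb)   # shared-semantics alignment term
    comp = info_nce(text_emb, img_comp)   # complementary-information term
    return align + alpha * comp

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
text = rng.normal(size=(8, 32))
img = text + 0.1 * rng.normal(size=(8, 32))       # close to paired text
comp = text + 0.3 * rng.normal(size=(8, 32))      # noisier "complementary" features
loss = dual_complementary_loss(text, img, comp)
print(f"dual loss: {float(loss):.4f}")
```

In an actual end-to-end system, `img_comp` would come from the learned complementary information extractor rather than perturbed text embeddings, and the loss would be minimized jointly with the retrieval objective.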
Problem

Research questions and friction points this paper is trying to address.

multimodal retrieval
complementary information
information alignment
image-text representation
semantic integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

complementary information
multimodal retrieval
contrastive learning
latent space alignment
information extraction