Reasoning-Augmented Representations for Multimodal Retrieval

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses spurious matches in universal multimodal retrieval on queries that require implicit reasoning, such as coreference resolution or satisfying compositional constraints, where a single embedding pass must entangle reasoning with feature compression. The authors propose a data-driven framework that uses powerful vision-language models to surface latent semantics before retrieval: densely annotating images, resolving ambiguous references, and rewriting complex queries into concise, structured constraints to build an enhanced training corpus. By externalizing reasoning and folding it into data curation, the approach decouples reasoning from embedding learning and mitigates distribution shift. Evaluated on the M-BEIR benchmark, the method significantly outperforms strong baselines; ablation studies further show that corpus augmentation chiefly boosts knowledge-intensive queries, while query rewriting is critical for compositional requests.
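
A minimal sketch of that curation pipeline is shown below. It assumes a generic instruction-following VLM wrapper: `vlm.generate(prompt, image=...)` is a hypothetical interface and the prompts are illustrative, not the authors' released code.

```python
# Sketch of the three enhancement passes described in the summary, under the
# assumption of a hypothetical VLM wrapper `vlm.generate(prompt, image=...)`.

def densify_corpus_entry(vlm, image, caption: str) -> str:
    """Surface 'silent' visual evidence by densely captioning a corpus image."""
    dense = vlm.generate(
        prompt="Describe all salient objects, attributes, relations, and any "
               "visible text in this image.",
        image=image,
    )
    # The enriched text is stored alongside the image for retriever training.
    return f"{caption}\n{dense}"

def resolve_references(vlm, query_text: str, query_image) -> str:
    """Make ambiguous multimodal references explicit (e.g., 'this dish')."""
    return vlm.generate(
        prompt="Rewrite the query so every reference to the attached image "
               f"is stated explicitly:\n{query_text}",
        image=query_image,
    )

def rewrite_as_constraints(vlm, query_text: str) -> str:
    """Compress a verbose instruction into concise retrieval constraints."""
    return vlm.generate(
        prompt="Reduce this request to a short list of retrieval "
               f"constraints:\n{query_text}",
    )
```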

📝 Abstract
Universal Multimodal Retrieval (UMR) seeks any-to-any search across text and vision, yet modern embedding models remain brittle when queries require latent reasoning (e.g., resolving underspecified references or matching compositional constraints). We argue this brittleness is often data-induced: when images carry "silent" evidence and queries leave key semantics implicit, a single embedding pass must both reason and compress, encouraging spurious feature matching. We propose a data-centric framework that decouples these roles by externalizing reasoning before retrieval. Using a strong Vision-Language Model, we make implicit semantics explicit by densely captioning visual evidence in corpus entries, resolving ambiguous multimodal references in queries, and rewriting verbose instructions into concise retrieval constraints. Inference-time enhancement alone is insufficient; the retriever must be trained on these semantically dense representations to avoid distribution shift and fully exploit the added signal. Across M-BEIR, our reasoning-augmented training method yields consistent gains over strong baselines, with ablations showing that corpus enhancement chiefly benefits knowledge-intensive queries while query enhancement is critical for compositional modification requests. We publicly release our code at https://github.com/AugmentedRetrieval/ReasoningAugmentedRetrieval.
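
The abstract stresses that the retriever must be fine-tuned on the enhanced data rather than only enhanced at inference time. A standard way to do that, shown below as an assumption since the page does not state the paper's exact objective, is in-batch contrastive training over (rewritten query, densified target) pairs; training on the densified text keeps the train-time and test-time input distributions aligned, which is the stated reason inference-time enhancement alone falls short.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              target_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over a batch of (rewritten query, densified target) pairs.

    query_emb, target_emb: (B, D) embeddings; row i of each tensor forms a
    positive pair, and all other rows in the batch serve as negatives.
    """
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature                # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    # Symmetric loss: match queries to targets and targets to queries.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```
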
Problem

Research questions and friction points this paper is trying to address.

Multimodal Retrieval
Latent Reasoning
Implicit Semantics
Compositional Constraints
Underspecified References
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reasoning-Augmented Retrieval
Multimodal Retrieval
Semantic Densification
Vision-Language Models
Data-Centric Enhancement
Jianrui Zhang
University of Wisconsin-Madison
Anirudh Sundara Rajan
University of Wisconsin-Madison
Brandon Han
University of Wisconsin-Madison
Soochahn Lee
Kookmin University
Sukanta Ganguly
NetApp, Inc.
Yong Jae Lee
Professor of Computer Sciences, UW-Madison
Computer vision, Machine learning