🤖 AI Summary
This work addresses the limitations of existing general-purpose multimodal retrieval models in effectively handling diverse user intents—ranging from simple keywords to complex compositional instructions—particularly when queries require logical reasoning. To this end, the authors propose an end-to-end framework that integrates generative reasoning with discriminative representation learning. The approach leverages a multimodal large language model to generate structured chains of thought (CoT) that explicitly parse query intent, which are then compressed into compact embeddings. A difficulty-aware routing mechanism dynamically decides whether to activate or bypass the reasoning module, thereby balancing accuracy and efficiency. Evaluated on the M-BEIR benchmark, the method achieves a new state of the art, with marked gains in complex-query understanding, inference efficiency, and cross-domain zero-shot generalization.
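The reason-then-embed pipeline with difficulty-aware routing can be sketched as follows. This is a minimal toy illustration of the control flow described above, not the paper's implementation: the router, CoT generator, and embedder are all hypothetical stand-ins (the real system uses an MLLM and a dedicated embedding token).

```python
import hashlib

def toy_router(query: str) -> float:
    # Hypothetical difficulty score: longer, multi-clause queries score higher.
    # The paper learns this routing; here it is a simple word-count heuristic.
    return min(1.0, len(query.split()) / 10.0)

def toy_generate_cot(query: str) -> str:
    # Stand-in for the MLLM's structured chain-of-thought over the query intent.
    return f"Intent: retrieve items matching '{query}'. Constraints: none parsed."

def toy_embed(text: str, dim: int = 8) -> list[float]:
    # Deterministic toy embedding from a hash; the real model compresses the
    # reasoning trace into a compact vector via a dedicated token.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def embed_query(query: str, threshold: float = 0.5) -> list[float]:
    """Difficulty-aware routing: reason first for hard queries, else embed directly."""
    if toy_router(query) >= threshold:
        cot = toy_generate_cot(query)          # activate reasoning
        return toy_embed(query + "\n" + cot)   # embed query + reasoning trace
    return toy_embed(query)                    # bypass reasoning for simple queries
```

A simple keyword like `"red car"` falls below the threshold and is embedded directly, while a long compositional instruction triggers the CoT branch first; this is the accuracy/throughput trade-off the routing mechanism targets.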
📝 Abstract
Universal Multimodal Retrieval requires unified embedding models capable of interpreting diverse user intents, ranging from simple keywords to complex compositional instructions. While Multimodal Large Language Models (MLLMs) possess strong reasoning capabilities, prevailing adaptations confine them to static encoders, underutilizing their generative potential. This encoder-only paradigm struggles with complex intents that demand logical deduction rather than superficial pattern matching. To address this, we introduce TRACE (Task-adaptive Reasoning And Compressing Embeddings). TRACE unifies generative reasoning with discriminative representation learning. It first generates a structured Chain-of-Thought (CoT) to explicitly reason about the query, and subsequently compresses this reasoning trace into a compact embedding via a dedicated token. To train this framework, we construct M-BEIR-CoT, a large-scale dataset featuring a difficulty-aware routing strategy. Experiments on the M-BEIR benchmark establish TRACE as the new state of the art. Crucially, TRACE demonstrates a learned implicit routing behavior: it autonomously activates reasoning for complex queries while bypassing it for simpler ones, achieving an optimal balance between retrieval accuracy and inference throughput. Furthermore, by internalizing the deductive process, TRACE exhibits remarkable zero-shot transferability to unseen domains and novel constraints.