OmniRet: Efficient and High-Fidelity Omni Modality Retrieval

📅 2026-03-02
🤖 AI Summary
This work addresses a limitation of existing cross-modal retrieval models, which are predominantly confined to the text–vision dual modalities and struggle to support composite queries involving text, vision, and audio. To this end, the authors propose OmniRet, the first efficient, high-fidelity tri-modal retrieval model capable of handling such composite queries. Its key innovations are an attention-based resampling mechanism that generates fixed-length compact representations for improved computational efficiency, and an Attention Sliced Wasserstein Pooling strategy designed to preserve fine-grained semantic information. The work also introduces ACM, the first audio-centric multimodal benchmark. Built on modality-specific encoders and a large language model backbone, OmniRet significantly outperforms current methods across 13 retrieval tasks and an MMEBv2 subset, with particularly strong results on composite querying, audio retrieval, and video retrieval, validating its omni-modal embedding capability.
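The attention-based resampling idea can be illustrated with a Perceiver-style cross-attention module: a fixed set of learned latent queries attends over the (arbitrarily long) encoder token sequence, producing a fixed-size output regardless of input length. This is a minimal generic sketch, not the paper's actual architecture; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class AttentionResampler(nn.Module):
    """Compress a variable-length token sequence into a fixed number of
    latent tokens via cross-attention (Perceiver-style resampling).
    Hypothetical sketch; not OmniRet's exact implementation."""

    def __init__(self, dim: int, num_latents: int = 64, num_heads: int = 8):
        super().__init__()
        # Learned latent queries: the output length is always num_latents.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) from a modality-specific encoder.
        batch = tokens.shape[0]
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)
        out, _ = self.attn(queries, tokens, tokens)  # (batch, num_latents, dim)
        return out

resampler = AttentionResampler(dim=256, num_latents=64)
audio_tokens = torch.randn(2, 1500, 256)  # e.g. a long audio token sequence
compact = resampler(audio_tokens)
print(compact.shape)  # torch.Size([2, 64, 256])
```

Feeding 64 latent tokens to the LLM instead of 1500 raw encoder tokens is what makes the downstream computation cheap and input-length independent.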

📝 Abstract
Multimodal retrieval is the task of aggregating information from queries across heterogeneous modalities to retrieve desired targets. State-of-the-art multimodal retrieval models can understand complex queries, yet they are typically limited to two modalities: text and vision. This limitation impedes the development of universal retrieval systems capable of comprehending queries that combine more than two modalities. To advance toward this goal, we present OmniRet, the first retrieval model capable of handling complex, composed queries spanning three key modalities: text, vision, and audio. Our OmniRet model addresses two critical challenges for universal retrieval: computational efficiency and representation fidelity. First, feeding massive token sequences from modality-specific encoders to Large Language Models (LLMs) is computationally inefficient. We therefore introduce an attention-based resampling mechanism to generate compact, fixed-size representations from these sequences. Second, compressing rich omni-modal data into a single embedding vector inevitably causes information loss and discards fine-grained details. We propose Attention Sliced Wasserstein Pooling to preserve these fine-grained details, leading to improved omni-modal representations. OmniRet is trained on an aggregation of approximately 6 million query-target pairs spanning 30 datasets. We benchmark our model on 13 retrieval tasks and an MMEBv2 subset. Our model demonstrates significant improvements on composed-query, audio, and video retrieval tasks, while achieving on-par performance with state-of-the-art models on the others. Furthermore, we curate a new Audio-Centric Multimodal Benchmark (ACM). This new benchmark introduces two critical, previously missing tasks, composed audio retrieval and audio-visual retrieval, to more comprehensively evaluate a model's omni-modal embedding capacity.
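The pooling idea can be sketched with plain sliced-Wasserstein pooling: instead of averaging token embeddings into one vector (which discards the distribution's shape), project the token set onto random 1-D directions and sort each projection, yielding a fixed-size descriptor that encodes the whole token distribution. This is a generic sketch of the underlying technique; the paper's attention-weighted variant is not reproduced here, and all names are illustrative.

```python
import torch

def sliced_wasserstein_pool(tokens: torch.Tensor,
                            num_slices: int = 16,
                            seed: int = 0) -> torch.Tensor:
    """Pool a token set (num_tokens, dim) into a fixed-size vector.
    Generic sliced-Wasserstein pooling sketch, not OmniRet's
    attention-weighted variant."""
    g = torch.Generator().manual_seed(seed)   # shared directions across inputs
    dim = tokens.shape[-1]
    dirs = torch.randn(dim, num_slices, generator=g)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)  # unit projection directions
    proj = tokens @ dirs                          # (num_tokens, num_slices)
    proj, _ = torch.sort(proj, dim=0)             # sort each 1-D projection
    return proj.flatten()                         # (num_tokens * num_slices,)

x = torch.randn(32, 128)            # 32 token embeddings of width 128
z = sliced_wasserstein_pool(x)
print(z.shape)  # torch.Size([512])
```

With shared directions and equal token counts, the L2 distance between two pooled descriptors approximates the sliced Wasserstein distance between the two token distributions, which is why this pooling preserves fine-grained detail that a simple mean would lose.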
Problem

Research questions and friction points this paper is trying to address.

multimodal retrieval
omni-modality
composed queries
audio-visual retrieval
universal retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

omni-modal retrieval
attention-based resampling
Attention Sliced Wasserstein Pooling
multimodal embedding
audio-centric benchmark