ACE: A Generative Cross-Modal Retrieval Framework with Coarse-To-Fine Semantic Modeling

📅 2024-06-25
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high training cost and inference latency of conventional dual- and single-tower architectures in large-scale cross-modal retrieval, and extends generative retrieval to text-to-image/audio/video settings. The proposed ACE framework combines K-Means-based coarse-grained clustering with Residual Quantized Variational Autoencoder (RQ-VAE)-based fine-grained quantization to build a unified identifier space for multimodal data, and introduces a coarse-to-fine cross-modal feature fusion strategy to bridge the modality gap between natural language queries and multimodal candidates. Evaluated on text-to-image, text-to-audio, and text-to-video retrieval, ACE achieves state-of-the-art performance, outperforming strong baselines on Recall@1 by 15.27% on average. The results support end-to-end generative alignment: explicit query-candidate similarity computation is eliminated in favor of efficient, token-level identifier generation.
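To make the identifier construction concrete, the snippet below sketches the coarse-to-fine idea in plain NumPy/scikit-learn. It is an illustrative approximation rather than the authors' implementation: a single K-Means pass supplies the coarse token, and further K-Means passes over the residuals stand in for the trained RQ-VAE codebooks; the codebook sizes, number of levels, and embedding dimension are assumed for the example.

```python
# Minimal sketch of coarse-to-fine identifier construction (not the paper's code).
# Assumptions: plain K-Means replaces the trained RQ-VAE codebooks; all sizes
# (n_coarse, n_fine, n_levels, embedding dim) are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def build_identifiers(embeddings, n_coarse=64, n_fine=64, n_levels=3, seed=0):
    """Map each candidate embedding to (coarse_token, fine_token_1, ..., fine_token_L)."""
    # Coarse token: cluster index from K-Means over the raw embeddings.
    coarse_km = KMeans(n_clusters=n_coarse, random_state=seed, n_init=10)
    coarse_ids = coarse_km.fit_predict(embeddings)

    # Fine tokens: residual quantization, each level quantizing what the
    # previous level left unexplained (stand-in for RQ-VAE code levels).
    residual = embeddings - coarse_km.cluster_centers_[coarse_ids]
    fine_ids = []
    for level in range(n_levels):
        km = KMeans(n_clusters=n_fine, random_state=seed + 1 + level, n_init=10)
        ids = km.fit_predict(residual)
        fine_ids.append(ids)
        residual = residual - km.cluster_centers_[ids]

    # Identifier = coarse token followed by the fine tokens, read left to right.
    return np.stack([coarse_ids, *fine_ids], axis=1)

# Example: 2,000 candidate items with 256-dimensional multimodal embeddings.
rng = np.random.default_rng(0)
identifiers = build_identifiers(rng.standard_normal((2000, 256)).astype(np.float32))
print(identifiers.shape)  # (2000, 4): one coarse + three fine tokens per item
```

Each item is thus addressed by a short, discrete token sequence, which is what the sequence-to-sequence model learns to emit.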

📝 Abstract
Generative retrieval, which has demonstrated effectiveness in text-to-text retrieval, utilizes a sequence-to-sequence model to directly generate candidate identifiers based on natural language queries. Without explicitly computing the similarity between queries and candidates, generative retrieval surpasses dual-tower models in both speed and accuracy on large-scale corpora, providing new insights for cross-modal retrieval. However, constructing identifiers for multimodal data remains an untapped problem, and the modality gap between natural language queries and multimodal candidates hinders retrieval performance due to the absence of additional encoders. To this end, we propose a pioneering generAtive Cross-modal rEtrieval framework (ACE), which is a comprehensive framework for end-to-end cross-modal retrieval based on coarse-to-fine semantic modeling. We propose combining K-Means and RQ-VAE to construct coarse and fine tokens, serving as identifiers for multimodal data. Correspondingly, we design the coarse-to-fine feature fusion strategy to efficiently align natural language queries and candidate identifiers. ACE is the first work to comprehensively demonstrate the feasibility of the generative approach on text-to-image/audio/video retrieval, challenging the dominance of the embedding-based dual-tower architecture. Extensive experiments show that ACE achieves state-of-the-art performance in cross-modal retrieval and outperforms the strong baselines on Recall@1 by 15.27% on average.
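The abstract's claim that the model "directly generate[s] candidate identifiers" is usually realized with prefix-constrained decoding: a trie over the candidate identifiers restricts each decoding step to tokens that can still complete a real identifier. The sketch below shows that mechanism with a greedy loop and a stub scoring function; ACE's actual seq2seq decoder and coarse-to-fine fusion module are not reproduced, so treat every name and number here as an assumption.

```python
# Hedged sketch of prefix-constrained identifier decoding, a common mechanism
# behind "generate candidate identifiers" in generative retrieval. The scorer
# below is a placeholder; in ACE it would be the query-conditioned seq2seq model.
from collections import defaultdict

def build_trie(identifiers):
    """Map every identifier prefix to the set of tokens that may follow it."""
    trie = defaultdict(set)
    for ident in identifiers:
        for i, tok in enumerate(ident):
            trie[ident[:i]].add(tok)
    return trie

def greedy_decode(score_next_token, trie, length):
    """Emit one identifier, only ever choosing continuations that exist in the
    candidate set, so the output always names a real item."""
    prefix = ()
    for _ in range(length):
        allowed = trie[prefix]
        prefix += (max(allowed, key=lambda tok: score_next_token(prefix, tok)),)
    return prefix

# Toy usage: three candidates with length-3 identifiers (1 coarse + 2 fine tokens).
candidates = [(5, 12, 7), (5, 12, 9), (8, 3, 1)]
trie = build_trie(candidates)
scorer = lambda prefix, tok: -tok  # placeholder for the model's next-token score
print(greedy_decode(scorer, trie, length=3))  # -> (5, 12, 7)
```

Because retrieval happens inside the decoder's token space, no pairwise query-candidate similarity has to be computed at inference time, which is the efficiency argument the abstract makes.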
Problem

Research questions and friction points this paper is trying to address.

Improving cross-modal retrieval efficiency with generative models
Reducing training cost and inference latency on large-scale corpora
Enhancing semantic alignment between queries and candidates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative cross-modal retrieval framework ACE
Coarse-to-fine semantic modeling with K-Means and RQ-VAE
Feature fusion strategy for query-candidate alignment
👥 Authors
Minghui Fang
Zhejiang University
Speech, Multi-Modal Learning, Information Retrieval
Shengpeng Ji
Zhejiang University, Huawei Noah’s Ark Lab
Jia-li Zuo
Zhejiang University, Huawei Noah’s Ark Lab
Hai Huang
Zhejiang University, Huawei Noah’s Ark Lab
Yan Xia
Zhejiang University, Huawei Noah’s Ark Lab
Jieming Zhu
Zhejiang University, Huawei Noah’s Ark Lab
Xize Cheng
Zhejiang University, Huawei Noah’s Ark Lab
Xiaoda Yang
Zhejiang University, Huawei Noah’s Ark Lab
Wenrui Liu
Zhejiang University
Time series, multi-modal, LLM
Gang Wang
Zhejiang University, Huawei Noah’s Ark Lab
Zhenhua Dong
Noah's Ark Lab, Huawei Technologies Co., Ltd.
Recommender system, causal inference, counterfactual learning, trustworthy AI, machine learning
Zhou Zhao
Zhejiang University
Machine Learning, Data Mining, Multimedia Computing