Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of recognizing complex real-world events and weak cross-modal alignment in zero-shot multilingual text-to-video retrieval, this paper proposes a Query-to-Event (Q2E) decomposition paradigm. It structurally refines concise user queries into fine-grained event semantics, implicitly distilling latent event knowledge from large language models (LLMs) and vision-language models (VLMs). The authors design an entropy-driven multimodal fusion scoring mechanism and an event-guided query decomposition strategy, aligning text, visual, and audio modalities at the event level. The method supports zero-shot transfer across datasets, languages, and models. Evaluated on two heterogeneous benchmarks, it consistently outperforms state-of-the-art methods; incorporating audio yields up to an 18.7% improvement in mean Average Precision (mAP). Code and data are publicly released.

📝 Abstract
Recent approaches have shown impressive proficiency in extracting and leveraging parametric knowledge from Large Language Models (LLMs) and Vision-Language Models (VLMs). In this work, we consider how we can improve the identification and retrieval of videos related to complex real-world events by automatically extracting latent parametric knowledge about those events. We present Q2E: a Query-to-Event decomposition method for zero-shot multilingual text-to-video retrieval, adaptable across datasets, domains, LLMs, and VLMs. Our approach demonstrates that we can enhance the understanding of otherwise overly simplified human queries by decomposing the query using the knowledge embedded in LLMs and VLMs. We additionally show how to apply our approach to both visual and speech-based inputs. To combine this varied multimodal knowledge, we adopt entropy-based fusion scoring for zero-shot fusion. Through evaluations on two diverse datasets and multiple retrieval metrics, we demonstrate that Q2E outperforms several state-of-the-art baselines. Our evaluation also shows that integrating audio information can significantly improve text-to-video retrieval. We have released code and data for future research.
Problem

Research questions and friction points this paper is trying to address.

Improving video retrieval for complex real-world events
Decomposing queries using LLM and VLM knowledge
Enhancing multilingual text-to-video retrieval with audio
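The query-decomposition idea above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the function name `decompose_query` and the prompt wording are assumptions, and `llm` stands in for any prompt-to-text callable (an LLM or VLM client).

```python
def decompose_query(query, llm):
    """Decompose a short user query into fine-grained event facets.

    `llm` is any callable mapping a prompt string to a text
    completion. The prompt template below is illustrative only;
    the paper's actual template may differ.
    """
    prompt = (
        "Break the following video-search query into distinct "
        "sub-events (participants, actions, locations, outcomes), "
        "one per line:\n"
        f"Query: {query}\n"
        "Sub-events:"
    )
    lines = llm(prompt).strip().splitlines()
    # Strip common bullet prefixes and drop empty lines.
    return [ln.lstrip("-• ").strip() for ln in lines if ln.strip()]
```

Each returned sub-event can then be matched against video, caption, or transcript embeddings independently, giving finer-grained signals than the original terse query.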
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-to-Event decomposition for video retrieval
Leveraging LLMs and VLMs for query understanding
Entropy-based fusion for multimodal knowledge
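One plausible reading of the entropy-based fusion contribution is to weight each modality's retrieval scores by the inverse entropy of its score distribution, so that confident (low-entropy) modalities dominate the fused ranking. The sketch below is an assumption about the mechanism, not the paper's verified implementation; the function name and the softmax normalization are choices made here for illustration.

```python
import numpy as np

def entropy_weighted_fusion(modality_scores):
    """Fuse per-modality score vectors over the same candidate set.

    Each modality's scores are softmax-normalized, its Shannon
    entropy is computed, and modalities are weighted by inverse
    entropy: a peaked (confident) distribution gets more weight
    than a flat (uncertain) one. Returns a fused probability
    vector over candidates.
    """
    weights, dists = [], []
    for scores in modality_scores:
        s = np.asarray(scores, dtype=float)
        p = np.exp(s - s.max())          # stable softmax
        p /= p.sum()
        h = -np.sum(p * np.log(p + 1e-12))  # Shannon entropy
        weights.append(1.0 / (h + 1e-12))
        dists.append(p)
    weights = np.asarray(weights)
    weights /= weights.sum()             # normalize modality weights
    return sum(w * p for w, p in zip(weights, dists))
```

For example, fusing a confident visual modality with a flat audio modality lets the visual ranking dominate, while the fused vector still sums to one, so it remains usable as a final retrieval score.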