Bidirectional Likelihood Estimation with Multi-Modal Large Language Models for Text-Video Retrieval

📅 2025-07-31
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In text-to-video retrieval, candidate prior bias impedes performance: likelihood-based methods tend to favor videos with high prior probability rather than high semantic relevance. To address this, the authors propose BLiM, the first framework leveraging multimodal large language models (MLLMs) for bidirectional likelihood estimation, covering both text-to-video and video-to-text directions. BLiM further introduces Candidate Prior Normalization (CPN), a training-free module that calibrates retrieval scores by explicitly modeling and mitigating candidate priors. By reformulating the scoring mechanism at the modeling level, BLiM inherently alleviates prior bias and improves semantic alignment. Evaluated on four standard benchmarks, BLiM achieves an average absolute gain of 6.4 in Recall@1 over state-of-the-art methods, demonstrating both effectiveness and strong cross-dataset generalization.

๐Ÿ“ Abstract
Text-Video Retrieval aims to find the most relevant text (or video) candidate given a video (or text) query from large-scale online databases. Recent work leverages multi-modal large language models (MLLMs) to improve retrieval, especially for long or complex query-candidate pairs. However, we observe that the naive application of MLLMs, i.e., retrieval based on candidate likelihood, introduces candidate prior bias, favoring candidates with inherently higher priors over those more relevant to the query. To address this, we propose a novel retrieval framework, Bidirectional Likelihood Estimation with MLLM (BLiM), which leverages both query and candidate likelihoods by training the model to generate text from a given video as well as video features from a given text. Furthermore, we introduce Candidate Prior Normalization (CPN), a simple yet effective training-free score calibration module designed to mitigate candidate prior bias in candidate likelihood. On four Text-Video Retrieval benchmarks, our BLiM equipped with CPN outperforms previous state-of-the-art models by 6.4 R@1 on average, effectively alleviating candidate prior bias and emphasizing query-candidate relevance. Our in-depth analysis across various multi-modal tasks beyond retrieval highlights the broad applicability of CPN, which enhances visual understanding by reducing reliance on textual priors. Code is available at https://github.com/mlvlab/BLiM.
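To make the calibration idea concrete, here is a minimal sketch of prior-normalized bidirectional scoring. The numbers, variable names, and the exact combination rule are illustrative assumptions for demonstration only; BLiM's actual scoring is defined in the paper and the linked repository.

```python
# Hypothetical log-likelihoods from an MLLM for one text query t and
# two candidate videos v0, v1 (all values are made up for illustration).
log_p_v_given_t = [-5.0, -6.0]   # candidate likelihood log P(v | t)
log_p_t_given_v = [-4.0, -3.0]   # query likelihood log P(t | v)
log_p_v = [-2.0, -6.0]           # candidate priors log P(v); v0 is "popular"

# Naive retrieval ranks by candidate likelihood alone, so it is pulled
# toward the high-prior candidate v0 regardless of relevance.
naive = log_p_v_given_t

# CPN-style calibration: subtract the candidate prior from the candidate
# likelihood; adding the query-direction term makes the score bidirectional.
calibrated = [
    (cand - prior) + query
    for cand, query, prior in zip(log_p_v_given_t, log_p_t_given_v, log_p_v)
]

best_naive = max(range(2), key=lambda i: naive[i])            # picks v0
best_calibrated = max(range(2), key=lambda i: calibrated[i])  # picks v1
```

In this toy example, naive candidate-likelihood ranking selects the high-prior video v0, while the prior-normalized bidirectional score selects v1, illustrating how calibration shifts the ranking toward query-candidate relevance.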
Problem

Research questions and friction points this paper is trying to address.

Mitigating candidate prior bias in text-video retrieval
Improving relevance of query-candidate pairs using MLLMs
Enhancing visual understanding by reducing textual prior reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional Likelihood Estimation with MLLM
Candidate Prior Normalization for bias mitigation
Training-free score calibration module CPN