Quantifying and Narrowing the Unknown: Interactive Text-to-Video Retrieval via Uncertainty Minimization

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-video retrieval (TVR) suffers from three inherent uncertainties: textual ambiguity, fuzzy text–video alignment, and low-quality video frames—collectively limiting retrieval accuracy. To address this, we propose the first uncertainty-minimization-based interactive TVR framework. Our method explicitly models and quantifies each uncertainty source: semantic entropy measures textual ambiguity; Jensen–Shannon divergence captures alignment uncertainty between text and video; and temporal quality-aware frame sampling evaluates frame fidelity. Leveraging these metrics, we generate principled, training-free clarification questions to iteratively refine user queries. Evaluated on MSR-VTT-1k, our framework achieves 69.2% Recall@1 after only 10 interaction rounds—outperforming all prior methods. This demonstrates the effectiveness and robustness of systematic uncertainty modeling and progressive uncertainty reduction in interactive TVR.
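The semantic-entropy idea above can be sketched concretely: sample several interpretations (paraphrases) of the user query, group them into meaning clusters, and take the entropy of the cluster distribution — a single-cluster query is unambiguous (entropy 0), while a query whose samples scatter across clusters scores high. The clustering step and function name below are illustrative assumptions, not the paper's exact formulation:

```python
import math
from collections import Counter

def semantic_entropy(cluster_labels):
    """Entropy over semantic clusters of sampled query interpretations.

    `cluster_labels` holds one cluster label per sampled paraphrase;
    the upstream meaning-based clustering (e.g. NLI grouping) is
    assumed to have been done already.
    """
    counts = Counter(cluster_labels)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# Unambiguous query: every sampled interpretation lands in one cluster.
print(semantic_entropy(["a", "a", "a", "a"]))  # 0.0

# Ambiguous query: interpretations split evenly across two clusters.
print(semantic_entropy(["a", "a", "b", "b"]))  # ln(2) ≈ 0.693
```

A high score would then trigger a clarifying question before retrieval proceeds.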

📝 Abstract
Despite recent advances, text-to-video retrieval (TVR) is still hindered by multiple inherent uncertainties, such as ambiguous textual queries, indistinct text–video mappings, and low-quality video frames. Although interactive systems have emerged to address these challenges by refining user intent through clarifying questions, current methods typically rely on heuristic or ad-hoc strategies without explicitly quantifying these uncertainties, limiting their effectiveness. Motivated by this gap, we propose UMIVR, an Uncertainty-Minimizing Interactive Text-to-Video Retrieval framework that explicitly quantifies three critical uncertainties (text ambiguity, mapping uncertainty, and frame uncertainty) via principled, training-free metrics: a semantic-entropy-based Text Ambiguity Score (TAS), a Jensen–Shannon-divergence-based Mapping Uncertainty Score (MUS), and a Temporal Quality-based Frame Sampler (TQFS). By adaptively generating targeted clarifying questions guided by these uncertainty measures, UMIVR iteratively refines user queries, significantly reducing retrieval ambiguity. Extensive experiments on multiple benchmarks validate UMIVR's effectiveness, achieving notable gains in Recall@1 (69.2% after 10 interactive rounds) on the MSR-VTT-1k dataset, thereby establishing an uncertainty-minimizing foundation for interactive TVR.
Problem

Research questions and friction points this paper is trying to address.

Textual queries are often ambiguous and text–video mappings indistinct, degrading retrieval accuracy
Low-quality video frames introduce a further source of uncertainty
Existing interactive systems rely on heuristic clarifying questions without explicitly quantifying these uncertainties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantifies text ambiguity via semantic entropy
Measures mapping uncertainty using Jensen-Shannon divergence
Samples frames based on temporal quality metrics
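As a hedged illustration of the mapping-uncertainty bullet above, Jensen–Shannon divergence compares two probability distributions symmetrically and is bounded by ln 2. The sketch below computes it between two retrieval score distributions over the same candidate videos; the function names and the choice of input distributions are assumptions for illustration, not the paper's exact MUS definition:

```python
import math

def kl_divergence(p, q):
    """Kullback–Leibler divergence KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric Jensen–Shannon divergence, bounded by ln(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Two retrieval distributions over the same candidate videos,
# e.g. similarities computed from two variants of the query.
confident = [0.85, 0.05, 0.05, 0.05]   # one clear top match
diffuse   = [0.25, 0.25, 0.25, 0.25]   # query matches everything equally

# High divergence signals an uncertain text-video mapping,
# which would trigger a clarifying question to the user.
print(js_divergence(confident, diffuse))
```

Identical distributions yield 0, and maximally disjoint ones yield ln 2, so the value can serve directly as a normalizable uncertainty score.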
Bingqing Zhang, The University of Queensland, Australia
Zhuo Cao, Forschungszentrum Jülich
Heming Du, The University of Queensland
Yang Li, CSIRO Data61, Australia
Xue Li, The University of Queensland, Australia
Jiajun Liu, CSIRO Data61, Australia
Sen Wang, The University of Queensland, Australia