X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning

📅 2025-09-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current text-to-video retrieval methods rely on embedding similarity (e.g., cosine distance), suffering from two key limitations: inability to discriminate low-quality text-video pairs and lack of interpretability in ranking outputs. To address these, we propose X-CoT, the first framework to integrate large language models' chain-of-thought (CoT) reasoning into text-to-video retrieval. X-CoT generates fine-grained, semantically coherent rationales for ranking decisions, replacing opaque similarity computations. Our method combines pairwise comparison with semantic-enhanced annotation to construct a debiased benchmark; it explicitly models matching logic via reasoning chains, jointly improving retrieval accuracy and interpretability. Experiments demonstrate that X-CoT significantly outperforms embedding-based baselines across multiple benchmarks. Moreover, it enables model behavior analysis and data quality diagnostics. The code and dataset are publicly released.

πŸ“ Abstract
Prevalent text-to-video retrieval systems mainly adopt embedding models for feature extraction and compute cosine similarities for ranking. However, this design presents two limitations. Low-quality text-video data pairs could compromise retrieval, yet are hard to identify and examine. Cosine similarity alone provides no explanation for the ranking results, limiting interpretability. We ask: can we interpret the ranking results, so as to assess the retrieval models and examine the text-video data? This work proposes X-CoT, an explainable retrieval framework built upon LLM CoT reasoning in place of embedding-based similarity ranking. We first expand the existing benchmarks with additional video annotations to support semantic understanding and reduce data bias. We also devise a retrieval CoT consisting of pairwise comparison steps, yielding detailed reasoning and a complete ranking. X-CoT empirically improves retrieval performance and produces detailed rationales. It also facilitates model behavior and data quality analysis. Code and data are available at: https://github.com/PrasannaPulakurthi/X-CoT.
Problem

Research questions and friction points this paper is trying to address.

Interpreting video retrieval ranking results for transparency
Identifying low-quality text-video pairs affecting retrieval accuracy
Replacing embedding models with explainable reasoning frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based Chain-of-Thought reasoning replaces embedding models
Expands benchmarks with video annotations to reduce bias
Pairwise comparison steps produce detailed ranking rationales
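The pairwise-comparison ranking idea can be sketched in a few lines: a comparator decides which of two candidate videos better matches the query, and a sorting pass over those comparisons yields a complete ranking. In X-CoT the comparator is an LLM CoT step that emits a rationale; the `toy_judge` below is a hypothetical word-overlap stand-in for that call, used only to make the sketch self-contained.

```python
from functools import cmp_to_key

def pairwise_rank(query, candidates, judge):
    """Rank candidates for a query via pairwise comparisons.

    `judge(query, a, b)` stands in for the LLM CoT step: it returns a
    negative number if `a` matches the query better than `b`, positive
    if worse, and 0 if tied. Sorting with this comparator turns the
    pairwise decisions into a complete ranking.
    """
    return sorted(candidates, key=cmp_to_key(lambda a, b: judge(query, a, b)))

def toy_judge(query, a, b):
    # Hypothetical judge: prefer the caption sharing more words with
    # the query. A real system would prompt an LLM to compare the two
    # videos' annotations and return a reasoned preference.
    overlap = lambda text: len(set(query.split()) & set(text.split()))
    return overlap(b) - overlap(a)

query = "a dog catching a frisbee in a park"
videos = [
    "a cat sleeping on a sofa",
    "a dog catching a frisbee outdoors",
    "a park with people walking",
]
ranking = pairwise_rank(query, videos, toy_judge)
print(ranking[0])  # best match first
```

The rationale for each ranking decision lives inside the judge, which is what makes the final ordering explainable, unlike a single opaque cosine-similarity score.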
🔎 Similar Papers
No similar papers found.