VidText: Towards Comprehensive Evaluation for Video Text Understanding

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video understanding benchmarks largely neglect dynamic text–vision interactions, while OCR-focused benchmarks are restricted to static images; neither adequately evaluates joint comprehension of textual and visual context in videos. Method: We introduce VidText, a comprehensive multilingual benchmark for dynamic video text understanding, featuring a hierarchical evaluation framework (video-level, clip-level, and instance-level tasks) and a set of paired perception–reasoning tasks that link visual text perception to cross-modal reasoning between textual and visual information. Contribution/Results: Systematic evaluation of 18 state-of-the-art Large Multimodal Models (LMMs) reveals consistently weak performance across most tasks; performance is shaped by model-intrinsic factors, such as input resolution and OCR capability, and by external factors, such as auxiliary information and Chain-of-Thought reasoning strategies. VidText fills a critical gap in dynamic multimodal text understanding evaluation, providing a reproducible benchmarking platform and concrete directions for model improvement.
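
To make the three-tier structure concrete, below is a minimal sketch of how per-tier scores could be aggregated. The field names and task labels are illustrative assumptions, not the paper's released schema.

```python
# Sketch of per-tier score aggregation for a VidText-style benchmark.
# Field names and task labels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    video_id: str
    level: str      # "video", "clip", or "instance"
    task: str       # e.g. "holistic_summary", "text_tracking" (hypothetical)
    question: str
    answer: str

def accuracy_by_level(samples, predictions):
    """Exact-match accuracy grouped by evaluation tier.

    `predictions` maps (video_id, task, question) to the model's answer.
    """
    totals, hits = {}, {}
    for s in samples:
        totals[s.level] = totals.get(s.level, 0) + 1
        pred = predictions.get((s.video_id, s.task, s.question), "")
        if pred.strip().lower() == s.answer.strip().lower():
            hits[s.level] = hits.get(s.level, 0) + 1
    return {level: hits.get(level, 0) / n for level, n in totals.items()}
```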

📝 Abstract
Visual texts embedded in videos carry rich semantic information, which is crucial for both holistic video understanding and fine-grained reasoning about local human actions. However, existing video understanding benchmarks largely overlook textual information, while OCR-specific benchmarks are constrained to static images, limiting their ability to capture the interaction between text and dynamic visual contexts. To address this gap, we propose VidText, a new benchmark designed for comprehensive and in-depth evaluation of video text understanding. VidText offers the following key features: 1) It covers a wide range of real-world scenarios and supports multilingual content, encompassing diverse settings where video text naturally appears. 2) It introduces a hierarchical evaluation framework with video-level, clip-level, and instance-level tasks, enabling assessment of both global summarization and local retrieval capabilities. 3) The benchmark also introduces a set of paired perception–reasoning tasks, ranging from visual text perception to cross-modal reasoning between textual and visual information. Extensive experiments on 18 state-of-the-art Large Multimodal Models (LMMs) reveal that current models struggle across most tasks, with significant room for improvement. Further analysis highlights the impact of both model-intrinsic factors, such as input resolution and OCR capability, and external factors, including the use of auxiliary information and Chain-of-Thought reasoning strategies. We hope VidText will fill the current gap in video understanding benchmarks and serve as a foundation for future research on multimodal reasoning with video text in dynamic environments.
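
As one example of the external factors the abstract mentions, a Chain-of-Thought ablation can be as simple as toggling a prompt wrapper. A minimal sketch, assuming a generic text-prompted LMM; the prompt wording is illustrative, not the paper's:

```python
# Sketch of a Chain-of-Thought vs. direct-answer prompting ablation.
# The prompt wording is an assumption for illustration only.
def build_prompt(question: str, use_cot: bool) -> str:
    if use_cot:
        return (
            f"{question}\n"
            "First, read out any text visible in the video and note when it "
            "appears. Then reason step by step before giving the final answer."
        )
    return f"{question}\nAnswer with the final answer only."
```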
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmarks for video text understanding in dynamic contexts
Need for hierarchical evaluation of video text comprehension
Current models struggle with multimodal reasoning in videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual video text benchmark for diverse scenarios
Hierarchical evaluation framework for video understanding
Paired perception–reasoning tasks for cross-modal analysis (see the sketch below)
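
A minimal sketch of how the paired design could be used to attribute errors, assuming a generic ask_model(video, prompt) -> str interface; ask_model is a placeholder, not the paper's API:

```python
# Sketch of scoring one perception-reasoning pair. `ask_model` is a
# placeholder callable, not an interface defined by the paper.
def score_pair(ask_model, video, perception_qa, reasoning_qa):
    """Answer the perception probe and its paired reasoning question.

    Keeping both outcomes lets the benchmark separate reasoning failures
    caused by missed text (perception errors) from genuine reasoning errors.
    """
    perception_ok = (
        ask_model(video, perception_qa["question"]).strip()
        == perception_qa["answer"]
    )
    reasoning_ok = (
        ask_model(video, reasoning_qa["question"]).strip()
        == reasoning_qa["answer"]
    )
    return {"perception": perception_ok, "reasoning": reasoning_ok}
```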