TextVidBench: A Benchmark for Long Video Scene Text Understanding

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing TVQA datasets are constrained by short video durations (under 3 minutes) and limited evaluation dimensions, hindering comprehensive assessment of the long-video understanding capabilities of multimodal large language models (MLLMs). To address this, we introduce TextVidBench, the first benchmark for long videos (over 3 minutes, 2306 seconds on average) spanning nine cross-domain scenarios, and propose a three-stage evaluation framework: text-based "needle-in-a-haystack" retrieval, temporal localization, and dynamic textual description. We further design the IT-Rope temporal enhancement mechanism and pair it with temporal prompt engineering and non-uniform positional encoding to overcome bottlenecks in long-horizon text-vision joint modeling. Experiments show that TextVidBench exposes critical weaknesses of mainstream MLLMs, while our approach achieves substantial improvements across multiple metrics. TextVidBench thus establishes a reproducible baseline and a systematic optimization pathway for long-video scene text understanding.
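The page gives no implementation details for the temporal prompt engineering mentioned above, but a common realization is to tag each sampled frame with its timestamp before asking the question. Below is a minimal sketch of that idea; the `build_temporal_prompt` name and the per-frame `<image>` placeholder are assumptions about the host MLLM's chat template, not the paper's actual API.

```python
def build_temporal_prompt(frame_times_s: list[float], question: str) -> str:
    """Tag each sampled frame with a mm:ss timestamp so the model can
    ground its answer in time. Illustrative sketch only; the paper's
    actual prompt format is not specified on this page."""
    parts = []
    for t in frame_times_s:
        minutes, seconds = divmod(int(t), 60)
        parts.append(f"[{minutes:02d}:{seconds:02d}] <image>")
    parts.append(f"Question: {question}")
    return "\n".join(parts)

# Example: three frames sampled from a 38-minute video.
print(build_temporal_prompt([0.0, 150.0, 2300.0],
                            "What headline appears on the ticker first?"))
```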

📝 Abstract
Despite recent progress on the short-video Text-Visual Question Answering (ViteVQA) task, largely driven by benchmarks such as M4-ViteVQA, existing datasets still suffer from limited video duration and narrow evaluation scopes, making it difficult to adequately assess the growing capabilities of powerful multimodal large language models (MLLMs). To address these limitations, we introduce TextVidBench, the first benchmark specifically designed for long-video text question answering (>3 minutes). TextVidBench makes three key contributions: 1) Cross-domain long-video coverage: spanning 9 categories (e.g., news, sports, gaming), with an average video length of 2306 seconds, enabling more realistic evaluation of long-video understanding. 2) A three-stage evaluation framework: "Text Needle-in-Haystack -> Temporal Grounding -> Text Dynamics Captioning". 3) High-quality fine-grained annotations: over 5,000 question-answer pairs with detailed semantic labeling. Furthermore, we propose an efficient paradigm for improving large models through: (i) introducing the IT-Rope mechanism and temporal prompt engineering to enhance temporal perception, (ii) adopting non-uniform positional encoding to better handle long video sequences, and (iii) applying lightweight fine-tuning on video-text data. Extensive experiments on multiple public datasets as well as TextVidBench demonstrate that our new benchmark presents significant challenges to existing models, while our proposed method offers valuable insights into improving long-video scene text understanding capabilities.
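To make the three-stage framework concrete, here is one plausible record layout for a single annotated question, written as a sketch: every field name and the stage vocabulary are assumptions for illustration, not the released schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TextVidBenchItem:
    """Hypothetical per-question record; the stages mirror the framework
    "Text Needle-in-Haystack -> Temporal Grounding -> Text Dynamics
    Captioning" described in the abstract."""
    video_id: str
    stage: str                 # "needle" | "grounding" | "captioning"
    question: str
    answer: str
    scene_text: str            # on-screen text the question targets
    time_span_s: Optional[Tuple[float, float]] = None  # grounding label

item = TextVidBenchItem(
    video_id="news_0412",
    stage="grounding",
    question="When does the word 'BREAKING' first appear on screen?",
    answer="12:40",
    scene_text="BREAKING",
    time_span_s=(760.0, 785.0),
)
```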
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmarks for long-video text understanding
Limited evaluation scopes in existing Text-VQA datasets
Challenges in assessing multimodal models for long videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

IT-Rope mechanism enhances temporal perception
Non-uniform positional encoding for long videos (see the sketch after this list)
Lightweight fine-tuning on video-text data
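Neither the abstract nor the summary specifies the non-uniform scheme, so the sketch below is one plausible reading: native token positions of a very long frame sequence are warped monotonically (here with a square-root curve, an arbitrary choice) so they all fall within the positional range seen during training, keeping early tokens at finer resolution and compressing later tokens more strongly. The function name and the 4096-position limit are assumptions.

```python
import torch

def nonuniform_position_ids(num_frames: int, tokens_per_frame: int,
                            max_trained_pos: int = 4096) -> torch.Tensor:
    """Warp native token positions of a long video into the range the
    model was trained on. Illustrative only: the square-root curve gives
    early tokens finer positional resolution and packs later tokens more
    densely, which is one way to realize a non-uniform remapping."""
    native = torch.arange(num_frames * tokens_per_frame, dtype=torch.float32)
    total = native.numel()
    if total <= max_trained_pos:
        return native.long()  # short clips need no remapping
    warped = torch.sqrt(native / (total - 1)) * (max_trained_pos - 1)
    return warped.round().long()

# 600 frames x 64 tokens = 38,400 native positions mapped into 4,096 slots.
position_ids = nonuniform_position_ids(num_frames=600, tokens_per_frame=64)
```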
👥 Authors
Yangyang Zhong
Zhejiang University
Ji Qi
Tsinghua University
Yuan Yao
Tsinghua University
Pengxin Luo
Zhejiang University
Yunfeng Yan
Zhejiang University
Donglian Qi
Zhejiang University
Zhiyuan Liu
Tsinghua University
Tat-Seng Chua
National University of Singapore