Gather and Trace: Rethinking Video TextVQA from an Instance-oriented Perspective

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Video TextVQA methods suffer from two key limitations: (1) redundant text entities and (2) implicit spatio-temporal relation modeling, both of which hinder accuracy and efficiency. To address these, the paper proposes GAT (Gather and Trace), an instance-oriented framework that reformulates Video TextVQA at the text-instance level. GAT comprises two core modules: (i) a context-aggregated instance gathering module that fuses the visual appearance, layout characteristics, and textual content of related entities into a unified textual representation; and (ii) an instance-focused trajectory tracing module that models the dynamic spatio-temporal evolution of text instances to infer the answer. Evaluated on several public benchmarks, GAT achieves a 3.86% absolute accuracy gain over prior Video TextVQA methods and runs roughly ten times faster at inference than video large language models, significantly advancing the accuracy-efficiency trade-off in Video TextVQA.

📝 Abstract
Video text-based visual question answering (Video TextVQA) aims to answer questions by explicitly reading and reasoning about the text that appears in a video. Most works in this field follow a frame-level framework that suffers from redundant text entities and implicit relation modeling, resulting in limitations in both accuracy and efficiency. In this paper, we rethink the Video TextVQA task from an instance-oriented perspective and propose a novel model termed GAT (Gather and Trace). First, to obtain accurate reading results for each video text instance, a context-aggregated instance gathering module is designed to integrate the visual appearance, layout characteristics, and textual contents of the related entities into a unified textual representation. Then, to capture the dynamic evolution of text in the video flow, an instance-focused trajectory tracing module is utilized to establish spatio-temporal relationships between instances and infer the final answer. Extensive experiments on several public Video TextVQA datasets validate the effectiveness and generalization of our framework. GAT outperforms existing Video TextVQA methods, video-language pretraining methods, and video large language models in both accuracy and inference speed. Notably, GAT surpasses the previous state-of-the-art Video TextVQA methods by 3.86% in accuracy and achieves ten times faster inference than video large language models. The source code is available at https://github.com/zhangyan-ucas/GAT.
Problem

Research questions and friction points this paper is trying to address.

Limited accuracy and efficiency of existing Video TextVQA methods
Redundant text entities across video frames
Dynamic evolution of text in videos is hard to capture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-aggregated instance gathering module
Instance-focused trajectory tracing module
Unified textual representation integration
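The two contributed modules can be illustrated in miniature: link per-frame OCR detections into text instances across adjacent frames (tracing), then consolidate each instance's redundant per-frame readings into one representation (gathering). The sketch below is purely illustrative, not the authors' implementation; the `TextEntity` fields, greedy IoU-based linking, and majority-vote consolidation are all simplifying assumptions standing in for the paper's learned modules.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TextEntity:
    """One OCR detection in one frame (assumed minimal representation)."""
    frame: int
    box: tuple  # (x1, y1, x2, y2)
    text: str

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def trace_instances(entities, iou_thresh=0.5):
    """Greedily link detections in consecutive frames by box overlap,
    a crude stand-in for instance-focused trajectory tracing."""
    trajectories = []
    for e in sorted(entities, key=lambda e: e.frame):
        for traj in trajectories:
            last = traj[-1]
            if e.frame == last.frame + 1 and iou(e.box, last.box) > iou_thresh:
                traj.append(e)
                break
        else:
            trajectories.append([e])  # start a new instance
    return trajectories

def gather(traj):
    """Consolidate a trajectory's redundant (possibly noisy) per-frame
    readings into one textual representation via majority vote."""
    return Counter(e.text for e in traj).most_common(1)[0][0]
```

Majority voting over a trajectory shows why instance-level grouping helps: a single misread frame (e.g. "5ALE" for "SALE") is outvoted by the correct readings of the same instance, and the downstream reasoner sees one entity instead of one per frame.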
Yan Zhang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Gangyan Zeng
Nanjing University of Science and Technology
Computer Vision · OCR · Multimodal Learning
Daiqing Wu
Institute of Information Engineering, CAS
Machine Learning
Huawen Shen
PhD, Chinese Academy of Sciences
Binbin Li
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Yu Zhou
VCIP & TMCC & DISSec, College of Computer Science, Nankai University
Can Ma
Unknown affiliation
Xiaojun Bi
Department of Computer Science, Stony Brook University
Human Computer Interaction · Mobile User Interfaces · Text Input · Human Performance Models