🤖 AI Summary
Existing Video TextVQA methods suffer from two key limitations: (1) textual entity redundancy and (2) implicit spatio-temporal relation modeling, both of which hurt accuracy and efficiency. To address these, we propose *GAT (Gather and Trace)*, an instance-oriented framework that reformulates Video TextVQA at the text-instance level. Our approach comprises two core modules: (i) a *context-aggregated instance gathering module* that explicitly fuses the visual appearance, layout characteristics, and textual contents of related entities into a unified textual representation; and (ii) an *instance-focused trajectory tracing module* that models the dynamic spatio-temporal evolution of text instances and infers the final answer. This design enables explicit relational reasoning while maintaining computational efficiency. Evaluated on multiple public benchmarks, GAT achieves a 3.86% absolute accuracy gain over prior Video TextVQA methods and runs roughly ten times faster than video large language models, significantly advancing the accuracy-efficiency trade-off in Video TextVQA.
📝 Abstract
Video text-based visual question answering (Video TextVQA) aims to answer questions by explicitly reading and reasoning about the text that appears in a video. Most works in this field follow a frame-level framework, which suffers from redundant text entities and implicit relation modeling, limiting both accuracy and efficiency. In this paper, we rethink the Video TextVQA task from an instance-oriented perspective and propose a novel model termed GAT (Gather and Trace). First, to obtain an accurate reading result for each video text instance, a context-aggregated instance gathering module is designed to integrate the visual appearance, layout characteristics, and textual contents of the related entities into a unified textual representation. Then, to capture the dynamic evolution of text in the video flow, an instance-focused trajectory tracing module is utilized to establish spatio-temporal relationships between instances and infer the final answer. Extensive experiments on several public Video TextVQA datasets validate the effectiveness and generalization of our framework. GAT outperforms existing Video TextVQA methods, video-language pretraining methods, and video large language models in both accuracy and inference speed. Notably, GAT surpasses the previous state-of-the-art Video TextVQA methods by 3.86% in accuracy and achieves ten times faster inference than video large language models. The source code is available at https://github.com/zhangyan-ucas/GAT.
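To make the two-stage design concrete, below is a minimal, illustrative PyTorch sketch of an instance-gathering step followed by a trajectory-tracing step. All class names, feature dimensions, and the answer-classification head are assumptions for illustration only; they are not taken from the released GAT code.

```python
# Hypothetical sketch of an instance-oriented Video TextVQA pipeline:
# (1) fuse per-entity visual/layout/textual features into instance vectors,
# (2) relate instances across frames and predict an answer.
import torch
import torch.nn as nn


class InstanceGathering(nn.Module):
    """Fuse visual, layout, and textual features of each OCR entity into one vector."""

    def __init__(self, d_vis=256, d_lay=8, d_txt=300, d_model=512):
        super().__init__()
        self.proj = nn.Linear(d_vis + d_lay + d_txt, d_model)
        self.context = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, vis, lay, txt):
        # vis: (B, N, d_vis), lay: (B, N, d_lay), txt: (B, N, d_txt)
        fused = self.proj(torch.cat([vis, lay, txt], dim=-1))
        return self.context(fused)  # (B, N, d_model) per-instance features


class TrajectoryTracing(nn.Module):
    """Relate instance features across frames and pool them for answer prediction."""

    def __init__(self, d_model=512, num_answers=5000):
        super().__init__()
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.classifier = nn.Linear(d_model, num_answers)

    def forward(self, instance_feats):
        # instance_feats: (B, T*N, d_model), instances from all frames flattened over time
        traced = self.temporal(instance_feats)
        return self.classifier(traced.mean(dim=1))  # (B, num_answers)


if __name__ == "__main__":
    B, T, N = 2, 4, 10  # batch size, frames, instances per frame
    gather, trace = InstanceGathering(), TrajectoryTracing()
    vis = torch.randn(B, T * N, 256)
    lay = torch.randn(B, T * N, 8)
    txt = torch.randn(B, T * N, 300)
    logits = trace(gather(vis, lay, txt))
    print(logits.shape)  # torch.Size([2, 5000])
```

The sketch treats answer prediction as classification over a fixed vocabulary purely for brevity; the paper's actual reading, tracing, and decoding details are described in the publication and repository linked above.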