SFA: Scan, Focus, and Amplify toward Guidance-aware Answering for Video TextVQA

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video TextVQA requires precise localization, understanding, and fusion of scene text appearing across frames with varying scales, orientations, and clarity, while jointly modeling temporal and semantic context for accurate answer generation. To address this, we propose the first training-free, parameter-free Video-LLM framework, featuring a novel “Scan–Focus–Amplify” three-stage prompting mechanism: (i) adaptive scanning via frame-level text detection; (ii) question-guided visual focusing on salient regions; and (iii) amplification of relevant textual–visual cues through cross-modal signal enhancement. Our method requires no fine-tuning—only structured prompt engineering to steer input attention distribution. Evaluated on multiple public benchmarks, it achieves state-of-the-art performance, demonstrating substantial gains in both accuracy and cross-domain generalization.

📝 Abstract
The video text-based visual question answering (Video TextVQA) task aims to answer questions about videos by leveraging the visual text that appears within them. The task poses significant challenges: models must accurately perceive and comprehend scene text that varies in scale, orientation, and clarity across frames, while effectively integrating temporal and semantic context to generate precise answers. Moreover, the model must identify question-relevant textual cues and filter out redundant or irrelevant information, so that answering is guided by the most relevant and informative cues. To address these challenges, we propose SFA, a training-free framework and the first Video-LLM-based method tailored for Video TextVQA, motivated by the human process of answering such questions. By adaptively scanning video frames, selectively focusing on key regions, and directly amplifying them, SFA effectively guides the Video-LLM's attention toward essential cues, enabling it to generate more accurate answers. SFA achieves new state-of-the-art results across several public Video TextVQA datasets, surpassing previous methods by a substantial margin and demonstrating its effectiveness and generalizability.
Problem

Research questions and friction points this paper is trying to address.

Addressing video text-based visual question answering challenges with varying text scales and orientations
Integrating temporal and semantic context across video frames for accurate responses
Identifying question-relevant textual cues while filtering redundant visual information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scan video frames adaptively for text detection
Focus on key regions with relevant textual cues
Amplify essential cues to guide Video-LLM attention
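The three stages above can be sketched as a simple pipeline. This is a minimal illustrative mock-up, not the paper's implementation: the function names (`scan`, `focus`, `amplify`, `sfa_answer`), the dictionary-based frame/region format, and the keyword-overlap relevance test are all assumptions standing in for the real frame-level text detector, question-guided region selection, and Video-LLM prompting.

```python
def scan(frames):
    """Stage 1 (Scan): adaptively keep only frames with detected scene text.
    Here detection results are assumed to be precomputed per frame."""
    return [f for f in frames if f["text_regions"]]

def focus(frames, question):
    """Stage 2 (Focus): keep regions whose text overlaps the question's words.
    A toy stand-in for question-guided visual focusing."""
    keywords = set(question.lower().split())
    focused = []
    for f in frames:
        hits = [r for r in f["text_regions"]
                if keywords & set(r["text"].lower().split())]
        if hits:
            focused.append({"frame_id": f["frame_id"], "regions": hits})
    return focused

def amplify(focused, zoom=2):
    """Stage 3 (Amplify): enlarge the selected regions so they dominate the
    model's input; here we only scale bounding boxes to mark the crop."""
    for f in focused:
        for r in f["regions"]:
            x, y, w, h = r["bbox"]
            r["bbox"] = (x, y, w * zoom, h * zoom)
    return focused

def sfa_answer(frames, question):
    """Training-free pipeline: Scan -> Focus -> Amplify -> query the model.
    The final Video-LLM call is replaced by returning the top amplified cue."""
    candidates = amplify(focus(scan(frames), question))
    texts = [r["text"] for f in candidates for r in f["regions"]]
    return texts[0] if texts else None
```

In the actual method these stages operate on pixels and structured prompts rather than text strings, but the control flow is the same: each stage narrows the evidence the Video-LLM attends to before answer generation.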
Haibin He
School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, Hubei, China
Qihuang Zhong
Wuhan University
Large Language Models, Natural Language Processing
Juhua Liu
School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, Hubei, China
Bo Du
Department of Management, Griffith Business School
Sustainable Transport, Travel Behaviour, Urban Data Analytics, Logistics and Supply Chain
Peng Wang
Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, UK
Jing Zhang
School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, Hubei, China