🤖 AI Summary
Video TextVQA requires precise localization, understanding, and fusion of scene text appearing across frames with varying scales, orientations, and clarity, while jointly modeling temporal and semantic context for accurate answer generation. To address this, we propose the first training-free, parameter-free Video-LLM framework for this task, built on a "Scan–Focus–Amplify" three-stage prompting mechanism: (i) adaptive scanning via frame-level text detection; (ii) question-guided visual focusing on salient regions; and (iii) amplification of relevant textual–visual cues through cross-modal signal enhancement. Our method requires no fine-tuning, relying only on structured prompt engineering to steer the model's input attention distribution. Evaluated on multiple public benchmarks, it achieves state-of-the-art performance, with substantial gains in both accuracy and cross-domain generalization.
📝 Abstract
The video text-based visual question answering (Video TextVQA) task aims to answer questions about videos by leveraging the visual text appearing within them. This task poses significant challenges: models must accurately perceive and comprehend scene text that varies in scale, orientation, and clarity across frames, while effectively integrating temporal and semantic context to generate precise answers. Moreover, the model must identify question-relevant textual cues and filter out redundant or irrelevant information so that answering is guided by the most informative evidence. To address these challenges, we propose SFA, a training-free framework and the first Video-LLM-based method tailored for Video TextVQA, motivated by how humans answer such questions. By adaptively scanning video frames, selectively focusing on key regions, and directly amplifying them, SFA guides the Video-LLM's attention toward essential cues, enabling it to generate more accurate answers. SFA achieves new state-of-the-art results across several public Video TextVQA datasets and surpasses previous methods by a substantial margin, demonstrating its effectiveness and generalizability.