🤖 AI Summary
This work addresses ineffective supervision in keyframe selection for video question answering, where information redundancy and high inference cost hinder the focus on question-relevant content. To this end, we propose a question-aware keyframe selection framework that leverages pseudo-labels generated by large vision-language models as synthetic supervision and introduces a coverage regularization mechanism that encourages temporally diverse, complementary evidence selection. The approach makes keyframe selection a learnable, question-guided module and significantly improves accuracy on the NExT-QA benchmark, with the largest gains on temporal and causal reasoning questions.
📝 Abstract
Large multimodal models (LMMs) have recently demonstrated remarkable performance in video question answering (VideoQA), yet reasoning over video remains challenging due to high inference cost and diluted information. Keyframe selection offers efficiency and sharper reasoning but suffers from sparse supervision and redundant frame choices when relying only on image-text similarity. We present a question-aware keyframe selection framework with two components: pseudo keyframe labels derived from LMMs, which provide informative supervision, and a coverage regularization term that promotes diverse, complementary evidence across time. Experiments on NExT-QA show that our method significantly improves accuracy, especially for temporal and causal question types, establishing keyframe selection as an effective and learnable module for VideoQA.
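The abstract does not spell out how the coverage regularization interacts with question-aware relevance scores at selection time. As a rough illustration only, the sketch below shows one plausible greedy selection rule under assumed details: per-frame relevance scores (e.g., from a scorer trained on LMM-derived pseudo keyframe labels) are combined with a Gaussian temporal-redundancy penalty so that selected frames are both question-relevant and spread across time. The function name, kernel choice, and hyperparameters (`lam`, `sigma`) are hypothetical, not taken from the paper.

```python
import numpy as np

def select_keyframes(scores, times, k=4, lam=0.5, sigma=0.2):
    """Greedy keyframe selection balancing relevance and temporal coverage.

    scores: per-frame question-relevance scores (assumed to come from a
            scorer trained on LMM-generated pseudo keyframe labels).
    times:  frame timestamps normalized to [0, 1].
    lam:    weight of the coverage (temporal-diversity) penalty.
    sigma:  bandwidth of the Gaussian redundancy kernel.
    """
    scores = np.asarray(scores, dtype=float)
    times = np.asarray(times, dtype=float)
    selected = []
    for _ in range(k):
        if selected:
            # Redundancy of each frame = closeness in time to any picked frame.
            d = np.abs(times[:, None] - times[selected][None, :])
            redundancy = np.exp(-(d / sigma) ** 2).max(axis=1)
        else:
            redundancy = np.zeros_like(scores)
        utility = scores - lam * redundancy
        utility[selected] = -np.inf  # never reselect a frame
        selected.append(int(np.argmax(utility)))
    return sorted(selected)

# Toy example: two bursts of high-relevance frames; the coverage penalty
# spreads the picks across both bursts instead of clustering in one.
times = np.linspace(0, 1, 10)
scores = np.array([0.1, 0.9, 0.85, 0.2, 0.1, 0.1, 0.8, 0.88, 0.3, 0.1])
print(select_keyframes(scores, times, k=3))  # → [1, 2, 7]
```

Without the redundancy term (`lam=0`), the top-3 picks would be frames 1, 7, and 2 purely by score; the penalty matters more as `lam` grows or the high-score frames cluster more tightly.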