Too Many Frames, not all Useful: Efficient Strategies for Long-Form Video QA

📅 2024-06-13
🏛️ arXiv.org
📈 Citations: 12
Influential: 2
📄 PDF
🤖 AI Summary
Long-form video question answering (LVQA) suffers from high visual redundancy and sparse salient information, yet existing methods inefficiently caption uniformly sampled frames with independent vision-language model (VLM) calls, leading to poor semantic utilization. To address this, the paper proposes LVNet, a lightweight framework built around a Hierarchical Keyframe Selector (HKFS) that performs question-guided temporal localization and semantic keyframe selection, substantially reducing vision-language modeling overhead. The approach achieves state-of-the-art performance on three major LVQA benchmarks (EgoSchema, NExT-QA, and IntentQA) and generalizes well to VideoMME, handling videos up to an hour long and enabling scalable, efficient, and semantically grounded long-video understanding.

📝 Abstract
Long-form videos that span wide temporal intervals are highly redundant and contain multiple distinct events or entities that are often only loosely related. Therefore, when performing long-form video question answering (LVQA), all the information needed to generate a correct response can often be found in a small subset of frames. Recent works explore the use of large language models (LLMs) on LVQA benchmarks, achieving exceptional performance while relying on vision-language models (VLMs) to convert all visual content within videos into natural language. Such VLMs often independently caption a large number of frames uniformly sampled from long videos, which is inefficient and largely redundant. Questioning these design choices, we explore optimal strategies for keyframe selection that can significantly reduce this redundancy, namely the Hierarchical Keyframe Selector. Our proposed framework, LVNet, achieves state-of-the-art performance at a comparable caption scale across three benchmark LVQA datasets (EgoSchema, NExT-QA, and IntentQA), while also demonstrating strong performance on videos up to an hour long in VideoMME. The code is publicly available at https://github.com/jongwoopark7978/LVNet.
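The core idea of the abstract, selecting a small, question-relevant subset of frames before any expensive captioning, can be sketched as a simple similarity-based top-k filter. This is a minimal illustration, not the paper's actual HKFS pipeline: the embedding model, the uniform pre-sampling of 900 frames, and the choice of k are all assumptions for the example.

```python
import numpy as np

def select_keyframes(frame_embs, question_emb, k=8):
    """Score each frame by cosine similarity to the question embedding
    and keep the k highest-scoring frames, returned in temporal order."""
    frames = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = question_emb / np.linalg.norm(question_emb)
    scores = frames @ q                 # cosine similarity per frame
    top = np.argsort(scores)[-k:]       # indices of the k best matches
    return np.sort(top)                 # restore temporal order

# Toy usage: 900 uniformly sampled frames with 512-d embeddings
# (random here, standing in for real vision-encoder features).
rng = np.random.default_rng(0)
frame_embs = rng.normal(size=(900, 512))
question_emb = rng.normal(size=512)
keyframes = select_keyframes(frame_embs, question_emb, k=8)
print(len(keyframes))  # only these 8 frames would be captioned
```

Only the selected frames are then passed to the VLM captioner, which is where the efficiency gain over captioning all sampled frames comes from; the paper's hierarchical selector refines this idea with multi-stage, question-guided filtering.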
Problem

Research questions and friction points this paper is trying to address.

Efficient key-frame selection for long-form video QA
Reducing redundancy in video frame processing
Optimizing vision-language models for long videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Keyframe Selector reduces redundancy
LVNet achieves state-of-the-art LVQA performance
Efficient captioning for long-form videos
Jong Sung Park
Stony Brook University
Kanchana Ranasinghe
PhD Student, Stony Brook University
Computer Vision, Deep Learning
Kumara Kahatapitiya
Research Scientist, Meta
Computer Vision, Machine Learning
Wonjeong Ryoo
Donghyun Kim
Korea University
M. Ryoo
Stony Brook University