🤖 AI Summary
Long-form video question answering (VQA) faces two core challenges: heavy frame redundancy and the difficulty of modeling long-range spatiotemporal dependencies; existing compression methods often discard critical events or fast space-time patterns. To address this, we propose BIMBA, a selective state-space model (SSM) that applies selective scanning, a mechanism largely unexplored for video token compression, to enable content-aware, dynamic keyframe selection and adaptive spatiotemporal feature compression, yielding compact token sequences suitable for efficient large language model (LLM) processing. We further design a multi-stage token refinement module to enhance semantic fidelity. Our method achieves state-of-the-art performance across six long-video VQA benchmarks, including PerceptionTest, NExT-QA, and EgoSchema, while substantially improving both inference efficiency and fine-grained event localization.
📝 Abstract
Video Question Answering (VQA) in long videos poses the key challenge of extracting relevant information and modeling long-range dependencies from many redundant frames. The self-attention mechanism provides a general solution for sequence modeling, but it incurs a prohibitive computational cost when applied to the massive number of spatiotemporal tokens in long videos. Most prior methods rely on compression strategies to lower this cost, such as reducing the input length via sparse frame sampling or compressing the output sequence passed to the large language model (LLM) via space-time pooling. However, these naive approaches over-represent redundant information and often miss salient events or fast-occurring space-time patterns. In this work, we introduce BIMBA, an efficient state-space model for handling long-form videos. Our model leverages the selective scan algorithm to learn to effectively select critical information from high-dimensional video and transform it into a reduced token sequence for efficient LLM processing. Extensive experiments demonstrate that BIMBA achieves state-of-the-art accuracy on multiple long-form VQA benchmarks, including PerceptionTest, NExT-QA, EgoSchema, VNBench, LongVideoBench, and Video-MME. Code and models are publicly available at https://sites.google.com/view/bimba-mllm.
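To make the core idea concrete, the sketch below shows how a Mamba-style selective scan with input-dependent parameters can compress a long sequence of video tokens into a shorter one before LLM processing. This is a minimal illustrative approximation, not BIMBA's released implementation: the class name `SelectiveScanCompressor` and the parameters `d_state` and `keep_every` are assumptions introduced here for exposition.

```python
# Minimal sketch (not the authors' code) of a Mamba-style selective scan
# used to compress flattened spatiotemporal video tokens. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class SelectiveScanCompressor(nn.Module):
    def __init__(self, d_model: int, d_state: int = 16, keep_every: int = 8):
        super().__init__()
        self.d_state = d_state
        self.keep_every = keep_every              # sequence compression ratio
        # Input-dependent ("selective") parameter projections, as in Mamba.
        self.to_delta = nn.Linear(d_model, d_model)  # per-token step size Δ_t
        self.to_B = nn.Linear(d_model, d_state)      # input gate B_t
        self.to_C = nn.Linear(d_model, d_state)      # output gate C_t
        # Log of the (negated) diagonal state matrix A, kept negative for stability.
        self.A_log = nn.Parameter(torch.zeros(d_model, d_state))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) flattened spatiotemporal video tokens.
        bsz, seq_len, d_model = x.shape
        A = -torch.exp(self.A_log)                               # (d, n)
        delta = torch.nn.functional.softplus(self.to_delta(x))   # (b, L, d)
        B = self.to_B(x)                                         # (b, L, n)
        C = self.to_C(x)                                         # (b, L, n)
        h = x.new_zeros(bsz, d_model, self.d_state)              # hidden state
        outputs = []
        for t in range(seq_len):  # sequential scan, O(L) in sequence length
            # Discretized recurrence: h_t = exp(Δ_t·A) ⊙ h_{t-1} + (Δ_t·B_t)·x_t
            dA = torch.exp(delta[:, t].unsqueeze(-1) * A)        # (b, d, n)
            dBx = (delta[:, t].unsqueeze(-1)
                   * B[:, t].unsqueeze(1)
                   * x[:, t].unsqueeze(-1))                      # (b, d, n)
            h = dA * h + dBx
            y_t = (h * C[:, t].unsqueeze(1)).sum(-1)             # (b, d)
            outputs.append(y_t)
        y = torch.stack(outputs, dim=1)                          # (b, L, d)
        # Keep every k-th output token: each kept token summarizes the history
        # the scan selected, so the sequence shrinks by a factor of keep_every.
        return y[:, self.keep_every - 1 :: self.keep_every]


# Usage: compress 512 video tokens to 64 before passing them to an LLM.
tokens = torch.randn(2, 512, 256)
compressor = SelectiveScanCompressor(d_model=256, keep_every=8)
compressed = compressor(tokens)   # shape: (2, 64, 256)
```

The key property this sketch illustrates is that, unlike fixed pooling, the transition and gating parameters (Δ_t, B_t, C_t) are computed from each token's content, so the state can retain salient events and skip redundant frames rather than averaging uniformly over them.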