AI Summary
Existing video multimodal large language models (MLLMs) commonly rely on uniform frame sampling and image-level encoding, which causes heavy computational redundancy and weak motion perception. To address this, we propose an efficient video MLLM framework that exploits the GOP (Group of Pictures) structure of compressed video: we introduce the first motion-aware GOP encoder and a slow-fast dual-stream architecture that operate directly in the compressed domain, jointly modeling RGB frames and motion vectors (MVs). We further design a GOP-level spatio-temporal fusion encoder and a lightweight visual token generation mechanism. Our contributions are: (1) MotionBench, the first benchmark explicitly designed to evaluate motion understanding; (2) state-of-the-art performance on MotionBench and mainstream video question answering benchmarks; and (3) significantly reduced inference cost together with strong scalability to long-video understanding.
Abstract
Most current video MLLMs rely on uniform frame sampling and image-level encoders, resulting in inefficient data processing and limited motion awareness. To address these challenges, we introduce EMA, an Efficient Motion-Aware video MLLM that takes compressed video structures as input. We propose a motion-aware GOP (Group of Pictures) encoder that fuses spatial and motion information within a GOP unit of the compressed video stream, generating compact yet informative visual tokens. By combining a small number of dense RGB frames with a larger number of sparse motion vectors, this native slow-fast input architecture reduces redundancy and strengthens motion representation. Additionally, we introduce MotionBench, a benchmark for evaluating motion understanding across four motion types: linear, curved, rotational, and contact-based. Experimental results show that EMA achieves state-of-the-art performance on both MotionBench and popular video question answering benchmarks while reducing inference cost. Moreover, EMA demonstrates strong scalability, as evidenced by its competitive performance on long video understanding benchmarks.
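The slow-fast idea described above (a few dense RGB keyframes plus many cheap motion-vector maps per GOP) can be sketched as a token-budget calculation. This is a minimal illustration, not the paper's actual encoder: the GOP length, grid sizes, channel dimensions, and downsampling stride below are all assumed values chosen for clarity.

```python
import numpy as np

def gop_token_budget(gop_len=12, h=14, w=14, rgb_dim=1024, mv_dim=256, mv_stride=2):
    """Illustrative token layout for one GOP: one dense RGB (I-)frame in the
    slow stream, plus downsampled motion-vector maps for the remaining
    frames in the fast stream. All sizes here are hypothetical."""
    # Slow stream: a single RGB keyframe produces h*w dense visual tokens.
    rgb_tokens = np.zeros((h * w, rgb_dim))
    # Fast stream: motion vectors for the other gop_len - 1 frames,
    # spatially downsampled by mv_stride and with a smaller channel dim,
    # so each motion token is far cheaper than an RGB token.
    mv_h, mv_w = h // mv_stride, w // mv_stride
    mv_tokens = np.zeros(((gop_len - 1) * mv_h * mv_w, mv_dim))
    return rgb_tokens, mv_tokens

rgb, mv = gop_token_budget()
print(rgb.shape, mv.shape)  # (196, 1024) (539, 256)
```

With these assumed sizes, covering a 12-frame GOP densely in RGB would cost 12 × 196 = 2352 full-width tokens, whereas the slow-fast split uses 196 full-width tokens plus 539 quarter-width motion tokens, which is the redundancy reduction the abstract refers to.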