Efficient Motion-Aware Video MLLM

📅 2025-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing video multimodal large language models (MLLMs) commonly adopt uniform frame sampling and image-level encoding, leading to high computational redundancy and weak motion perception. To address this, we propose an efficient video MLLM framework leveraging the compressed video GOP (Group of Pictures) structure: we introduce the first motion-aware GOP encoder and a slow-fast dual-stream architecture operating directly in the compression domain, enabling joint modeling of RGB frames and motion vectors (MVs). Furthermore, we design a GOP-level spatio-temporal fusion encoder and a lightweight visual token generation mechanism. Our contributions include: (1) introducing MotionBench, the first benchmark explicitly designed for motion understanding evaluation; (2) achieving state-of-the-art performance on MotionBench and mainstream video question answering benchmarks; and (3) significantly reducing inference cost while demonstrating superior scalability for long-video understanding.
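The slow-fast idea above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: it assumes a fixed GOP size and simply routes each GOP's leading I-frame to a sparse "slow" RGB stream while the remaining P/B-frames contribute only their motion vectors to a dense "fast" stream (function name and parameters are hypothetical).

```python
# Hypothetical sketch of slow-fast sampling over a compressed video's
# GOP structure. A real system would read GOP boundaries and motion
# vectors from the codec bitstream rather than assume a fixed GOP size.

def gop_slow_fast_indices(num_frames, gop_size):
    """Split frame indices into a sparse 'slow' RGB stream (one I-frame
    per GOP) and a dense 'fast' motion-vector stream (the remaining
    P/B-frames, whose MVs are by-products of decoding)."""
    slow, fast = [], []
    for i in range(num_frames):
        if i % gop_size == 0:   # first frame of each GOP: the I-frame
            slow.append(i)      # decode full RGB only here
        else:
            fast.append(i)      # keep only the motion vectors here
    return slow, fast

slow, fast = gop_slow_fast_indices(num_frames=12, gop_size=4)
# slow -> [0, 4, 8]; fast -> the nine remaining P/B-frame indices
```

Because the fast stream never requires full RGB decoding, the cost of adding more motion samples stays low, which is the source of the claimed efficiency gain.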

๐Ÿ“ Abstract
Most current video MLLMs rely on uniform frame sampling and image-level encoders, resulting in inefficient data processing and limited motion awareness. To address these challenges, we introduce EMA, an Efficient Motion-Aware video MLLM that utilizes compressed video structures as inputs. We propose a motion-aware GOP (Group of Pictures) encoder that fuses spatial and motion information within a GOP unit in the compressed video stream, generating compact, informative visual tokens. By integrating fewer but denser RGB frames with more but sparser motion vectors in this native slow-fast input architecture, our approach reduces redundancy and enhances motion representation. Additionally, we introduce MotionBench, a benchmark for evaluating motion understanding across four motion types: linear, curved, rotational, and contact-based. Experimental results show that EMA achieves state-of-the-art performance on both MotionBench and popular video question answering benchmarks, while reducing inference costs. Moreover, EMA demonstrates strong scalability, as evidenced by its competitive performance on long video understanding benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Inefficient data processing in video MLLMs
Limited motion awareness in current video MLLMs
Need for better motion representation in video analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes compressed video structures for inputs
Introduces motion-aware GOP encoder for fusion
Reduces redundancy with slow-fast input architecture
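To make the fusion contribution above concrete, here is a minimal sketch of how one GOP's RGB features and motion vectors could be combined into a few compact visual tokens. All names, shapes, and the pooling scheme are assumptions for illustration; the paper's actual GOP encoder is a learned module.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): fuse one GOP's
# RGB I-frame features with its motion-vector features into a small set
# of tokens, mimicking "fewer but denser RGB frames, more but sparser
# motion vectors".

def fuse_gop_tokens(rgb_feat, mv_feats, num_tokens=4):
    """rgb_feat: (P, D) patch features of the GOP's single RGB frame.
    mv_feats: (T, P, 2) motion vectors of the GOP's T remaining frames.
    Returns (num_tokens, D + 2): pooled RGB features concatenated with
    temporally averaged per-patch motion statistics."""
    P, D = rgb_feat.shape
    chunk = P // num_tokens
    mv_mean = mv_feats.mean(axis=0)                      # (P, 2)
    fused = np.concatenate([rgb_feat, mv_mean], axis=1)  # (P, D + 2)
    # pool groups of patches down to a few compact visual tokens
    tokens = fused[:chunk * num_tokens].reshape(
        num_tokens, chunk, D + 2).mean(axis=1)
    return tokens

rgb = np.random.rand(16, 8)     # 16 patches, 8-dim features
mvs = np.random.rand(3, 16, 2)  # 3 P-frames of per-patch motion vectors
tokens = fuse_gop_tokens(rgb, mvs)
# tokens.shape == (4, 10)
```

The point of the sketch is the token budget: a whole GOP collapses to `num_tokens` vectors instead of one token grid per frame, which is how redundancy is reduced before the LLM ever sees the video.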
Zijia Zhao
Institute of Automation, Chinese Academy of Sciences (CASIA)
Multimodal learning
Yuqi Huo
Bytedance Inc.
multi-modal pretraining
Tongtian Yue
Institute of Automation, Chinese Academy of Sciences
Multimodal Pretraining, Visual-Language
Longteng Guo
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Haoyu Lu
Renmin University of China
Bingning Wang
Baichuan Inc.
NLP, Question Answering, Large language model
Weipeng Chen
Baichuan Inc.
Jing Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences