VRoPE: Rotary Position Embedding for Video Large Language Models

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the limited ability of Rotary Position Embedding (RoPE) to model the spatiotemporal structure of video in Video Large Language Models (Video-LLMs). Existing RoPE variants suffer from positional bias in attention distributions and from structural discontinuity at the video-to-text modality transition. To resolve these issues, the authors propose Video-specific RoPE (VRoPE), a positional encoding scheme dedicated to video. VRoPE introduces three key innovations: (1) a 3D spatiotemporal extension of RoPE, (2) reparameterized position indexing that preserves spatial coherence across frames, and (3) an attention-balancing encoding strategy that mitigates positional bias. Extensive experiments on Vicuna- and Qwen2-based Video-LLMs at multiple model scales demonstrate that VRoPE consistently outperforms RoPE-3D and other baselines, yielding substantial gains in video understanding, temporal reasoning, and cross-modal retrieval. To the authors' knowledge, VRoPE is the first RoPE variant explicitly optimized for video's intrinsic spatiotemporal structure, establishing a new paradigm for video–language joint modeling.
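The 3D extension in point (1) builds on standard 1D RoPE. A minimal NumPy sketch of 1D RoPE plus a naive channel-split (t, h, w) extension in the spirit of the RoPE-3D baseline (illustrative only; `rope_rotate` and `rope_3d` are hypothetical names, and this is not VRoPE's actual formulation):

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Standard 1D rotary position embedding: rotate channel pairs of
    vector x by angles proportional to the integer position `pos`."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequency
    theta = pos * freqs                         # rotation angle per pair
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)], axis=-1)

def rope_3d(x, t, h, w):
    """Naive 3D extension: split the channels into three groups and rotate
    each group by one axis of the (t, h, w) position index."""
    d = x.shape[-1] // 3
    parts = [rope_rotate(x[..., i * d:(i + 1) * d], p)
             for i, p in enumerate((t, h, w))]
    return np.concatenate(parts, axis=-1)
```

The defining RoPE property survives the split: a rotated query–key dot product depends only on the relative offset along each axis, which is what makes per-axis rotation attractive for video — and also what introduces the attention biases the summary says VRoPE targets.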

📝 Abstract
Rotary Position Embedding (RoPE) has shown strong performance in text-based Large Language Models (LLMs), but extending it to video remains a challenge due to the intricate spatiotemporal structure of video frames. Existing adaptations, such as RoPE-3D, attempt to encode spatial and temporal dimensions separately but suffer from two major limitations: positional bias in attention distribution and disruptions in video-text transitions. To overcome these issues, we propose Video Rotary Position Embedding (VRoPE), a novel positional encoding method tailored for Video-LLMs. Our approach restructures positional indices to preserve spatial coherence and ensure a smooth transition between video and text tokens. Additionally, we introduce a more balanced encoding strategy that mitigates attention biases, ensuring a more uniform distribution of spatial focus. Extensive experiments on Vicuna and Qwen2 across different model scales demonstrate that VRoPE consistently outperforms previous RoPE variants, achieving significant improvements in video understanding, temporal reasoning, and retrieval tasks. Code will be available at https://github.com/johncaged/VRoPE
Problem

Research questions and friction points this paper is trying to address.

Extend Rotary Position Embedding to video.
Address positional bias in attention distribution.
Ensure smooth video-text token transitions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Rotary Position Embedding
Balanced encoding strategy
Smooth video-text transitions
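The "smooth video-text transitions" point can be illustrated generically: if video tokens carry 3D indices, the text tokens that follow can resume 1D indexing from one past the largest index the video used, so position values do not jump at the modality boundary. A minimal sketch under that assumption (`next_text_position` is a hypothetical helper, not the paper's formula):

```python
def next_text_position(video_positions):
    """Pick the first 1D index for text tokens that follow a video,
    given the (t, h, w) index of each video token, so position values
    stay contiguous across the modality boundary.
    Hypothetical illustration, not VRoPE's actual scheme."""
    last = max(max(p) for p in video_positions)  # largest index on any axis
    return last + 1

# 2 frames of 3x3 patches -> axis indices run 0..2, so text starts at 3.
video = [(t, h, w) for t in range(2) for h in range(3) for w in range(3)]
print(next_text_position(video))
```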
Zikang Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Longteng Guo
Institute of Automation, Chinese Academy of Sciences
Yepeng Tang
Beijing Jiaotong University
Video LLM · Video Understanding
Junxian Cai
Basic Algorithm Center, Tencent
Kai Ma
Basic Algorithm Center, Tencent
Xi Chen
Basic Algorithm Center, Tencent
Jing Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences