VideoMAP: Toward Scalable Mamba-based Video Autoregressive Pretraining

📅 2025-03-16
🤖 AI Summary
Mamba-based architectures are computationally efficient for video understanding but suffer from overfitting that limits their scalability. To address this, the authors propose VideoMAP, a hybrid Mamba-Transformer architecture with a 4:1 Mamba-to-Transformer ratio, paired with a frame-level masked autoregressive pretraining paradigm. This design mitigates overfitting, strengthens long-sequence modeling, and improves sample efficiency. VideoMAP combines Mamba's efficient state-space modeling with the Transformer's ability to capture global temporal dependencies, and is adapted as a vision encoder for integration with multimodal large language models. Experiments show that VideoMAP achieves state-of-the-art performance on Kinetics-400, Something-Something V2, Breakfast, and COIN, significantly reduces memory consumption, and supports substantially longer video inputs. The implementation is publicly available.
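The summary describes frame-level masked autoregressive pretraining: masked tokens in each frame are predicted conditioned on earlier frames. A minimal sketch of what such a masking plan might look like is below; the function name, mask ratio, and per-frame conditioning scheme are guesses from the abstract, not the authors' actual recipe.

```python
import random

def frame_masking_plan(num_frames, tokens_per_frame, mask_ratio=0.75, seed=0):
    """Toy sketch of a frame-wise masked autoregressive pretraining plan.

    For each frame t, a random subset of token positions is masked; the model
    would predict frame t's masked tokens conditioned on frames 0..t-1
    (autoregressive across frames). Hypothetical illustration only.
    """
    rng = random.Random(seed)
    num_masked = int(tokens_per_frame * mask_ratio)
    plan = []
    for t in range(num_frames):
        masked = sorted(rng.sample(range(tokens_per_frame), num_masked))
        plan.append({
            "frame": t,
            "context_frames": list(range(t)),  # frames visible when predicting frame t
            "masked_tokens": masked,           # token positions to reconstruct
        })
    return plan

plan = frame_masking_plan(num_frames=4, tokens_per_frame=16)
```

Frame 0 has an empty context, so its masked tokens are reconstructed from its own visible tokens alone; later frames additionally condition on all preceding frames.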

📝 Abstract
Recent Mamba-based architectures for video understanding demonstrate promising computational efficiency and competitive performance, yet struggle with overfitting issues that hinder their scalability. To overcome this challenge, we introduce VideoMAP, a Hybrid Mamba-Transformer framework featuring a novel pre-training approach. VideoMAP uses a 4:1 Mamba-to-Transformer ratio, effectively balancing computational cost and model capacity. This architecture, combined with our proposed frame-wise masked autoregressive pre-training strategy, delivers significant performance gains when scaling to larger models. Additionally, VideoMAP exhibits impressive sample efficiency, significantly outperforming existing methods with less training data. Experiments show that VideoMAP outperforms existing models across various datasets, including Kinetics-400, Something-Something V2, Breakfast, and COIN. Furthermore, we demonstrate the potential of VideoMAP as a visual encoder for multimodal large language models, highlighting its ability to reduce memory usage and enable the processing of longer video sequences. The code is open-source at https://github.com/yunzeliu/MAP
Problem

Research questions and friction points this paper is trying to address.

Overcoming overfitting in Mamba-based video understanding models
Balancing computational cost and model capacity in video architectures
Enhancing sample efficiency and performance in video autoregressive pretraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Mamba-Transformer framework with 4:1 ratio
Frame-wise masked autoregressive pre-training strategy
Efficient visual encoder for multimodal large language models
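The 4:1 Mamba-to-Transformer ratio can be sketched as a layer schedule in which every four Mamba blocks are followed by one Transformer block. The helper below is illustrative only; the paper states the ratio, and the exact interleaving pattern assumed here is a guess.

```python
def build_layer_schedule(depth: int, ratio: int = 4) -> list[str]:
    """Sketch of a hybrid stack: every `ratio` Mamba blocks are followed by
    one Transformer block. Hypothetical interleaving, not the authors' code."""
    schedule = []
    for i in range(depth):
        # After each group of `ratio` Mamba blocks, place one Transformer block.
        if (i + 1) % (ratio + 1) == 0:
            schedule.append("transformer")
        else:
            schedule.append("mamba")
    return schedule

print(build_layer_schedule(10))
```

With `depth=10` and the default 4:1 ratio, the schedule alternates four Mamba blocks with one Transformer block, keeping the quadratic-attention layers sparse while retaining periodic global mixing.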
Yunze Liu
IIIS, Tsinghua University; Memories.ai Research
AI Memories, 3D Computer Vision, Embodied AI, Egocentric Video
Peiran Wu
PhD Student, University of Bristol; Independent Researcher
Computer Vision, MLLM, Video
Cheng Liang
Shanghai AI Lab
VLM
Junxiao Shen
University of Bristol
Limin Wang
Nanjing University; Shanghai Artificial Intelligence Laboratory
Li Yi
IIIS, Tsinghua University; Shanghai Qi Zhi Institute; Shanghai Artificial Intelligence Laboratory