LV-MAE: Learning Long Video Representations through Masked-Embedding Autoencoders

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing long-video understanding methods struggle to jointly model short-term spatiotemporal patterns and long-range temporal dependencies, and are further constrained by fixed frame budgets and segment cropping. Method: LV-MAE is a self-supervised masked-embedding autoencoder framework for long videos. It decouples short-span spatiotemporal modeling (handled by an off-the-shelf multimodal short-segment encoder) from long-range inter-segment dependency modeling (a masked-embedding autoencoder pre-trained over the segment embeddings), enabling self-supervised pre-training directly on minute-scale videos (e.g., 20+ minute clips) without a fixed limit on input frames and without manual annotations. Contribution/Results: LV-MAE achieves state-of-the-art results on three long-video benchmarks—LVU, COIN, and Breakfast—using only a simple classification head for linear or attentive probing. Reconstruction quality is assessed by monitoring video–text retrieval in the video-language-aligned space of the short-segment representations.

📝 Abstract
In this work, we introduce long-video masked-embedding autoencoders (LV-MAE), a self-supervised learning framework for long video representation. Our approach treats short- and long-span dependencies as two separate tasks. Such decoupling allows for a more intuitive video processing where short-span spatiotemporal primitives are first encoded and are then used to capture long-range dependencies across consecutive video segments. To achieve this, we leverage advanced off-the-shelf multimodal encoders to extract representations from short segments within the long video, followed by pre-training a masked-embedding autoencoder capturing high-level interactions across segments. LV-MAE is highly efficient to train and enables the processing of much longer videos by alleviating the constraint on the number of input frames. Furthermore, unlike existing methods that typically pre-train on short-video datasets, our approach offers self-supervised pre-training using long video samples (e.g., 20+ minutes video clips) at scale. Using LV-MAE representations, we achieve state-of-the-art results on three long-video benchmarks -- LVU, COIN, and Breakfast -- employing only a simple classification head for either attentive or linear probing. Finally, to assess LV-MAE pre-training and visualize its reconstruction quality, we leverage the video-language aligned space of short video representations to monitor LV-MAE through video-text retrieval.
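The core pre-training idea described above—masking a subset of precomputed segment embeddings and reconstructing them from the visible ones—can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the zero-vector stand-in for a learned [MASK] token, and the absence of the transformer encoder are all simplifying assumptions.

```python
import numpy as np

def mask_embeddings(seg_embs, mask_ratio=0.5, rng=None):
    """MAE-style corruption of precomputed segment embeddings.
    seg_embs: (num_segments, dim) array from a short-segment encoder.
    Returns the corrupted sequence and a boolean mask of hidden slots."""
    rng = rng or np.random.default_rng(0)
    n, d = seg_embs.shape
    num_masked = int(n * mask_ratio)
    hidden = rng.permutation(n)[:num_masked]
    mask = np.zeros(n, dtype=bool)
    mask[hidden] = True
    corrupted = seg_embs.copy()
    corrupted[mask] = np.zeros(d)  # stand-in for a learned [MASK] embedding
    return corrupted, mask

def reconstruction_loss(pred, target, mask):
    """Mean-squared error computed only over the masked positions."""
    diff = pred[mask] - target[mask]
    return float((diff ** 2).mean())

# Toy usage: 8 segments of 4-dim embeddings. A real model would pass the
# corrupted sequence through a transformer; here the "prediction" is just
# the corrupted input, so the loss reflects the masked-out content.
rng = np.random.default_rng(0)
embs = rng.standard_normal((8, 4))
corrupted, mask = mask_embeddings(embs, mask_ratio=0.5,
                                  rng=np.random.default_rng(1))
loss = reconstruction_loss(corrupted, embs, mask)
```

Because the loss is taken only on masked slots, the encoder must use cross-segment context to fill in missing segments, which is what pushes it to capture long-range dependencies.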
Problem

Research questions and friction points this paper is trying to address.

Develop self-supervised learning for long video representation
Decouple short- and long-span dependencies in video processing
Enable efficient training with longer video inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples short- and long-span video dependencies
Uses masked-embedding autoencoder for segment interactions
Enables efficient long-video self-supervised pre-training