🤖 AI Summary
Long-video understanding faces two key bottlenecks: degraded spatial representations due to poor video–language alignment, and memory constraints that limit temporal extent. To address these, we propose a masked contrastive pre-training paradigm. Our approach is the first to empirically validate that high-ratio input masking (up to 75%) significantly alleviates GPU memory pressure while enhancing temporal modeling, enabling single-pass processing of videos up to 258 frames (~4.3 minutes at 1 FPS) without backbone modification. Key technical components include multi-resolution patchification, factorized attention, and parameter-efficient image-to-video adaptation. Evaluated on long-horizon benchmarks including YouCook2 and EgoSchema, our method surpasses mainstream LLM-based segment-aggregation approaches. Furthermore, it scales effectively to a 1B-parameter model, demonstrating robustness and generalizability across architecture sizes.
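The memory savings above can be seen with back-of-envelope arithmetic over attention-matrix sizes. Below is a minimal sketch: T = 258 frames comes from the summary, while S = 196 patches per frame is an assumed value (a 14×14 patch grid for 224-pixel input), not a number from the source.

```python
# Back-of-envelope count of attention score entries for a video of
# T frames with S patches each. S=196 is an assumed patch count.
T, S = 258, 196
joint = (T * S) ** 2                  # full joint spatio-temporal attention
factorized = T * S ** 2 + S * T ** 2  # spatial attention per frame + temporal per patch
masked = (T * S // 4) ** 2            # joint attention after 75% input masking

print(f"joint attention entries:      {joint:,}")
print(f"factorized attention entries: {factorized:,}")
print(f"75%-masked joint entries:     {masked:,}")
# Dropping 75% of tokens shrinks the quadratic joint-attention cost
# by ~16x with no change to the backbone architecture.
```

This illustrates why masking is such a robust lever: it attacks the quadratic term directly, whereas factorization changes the attention pattern itself.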
📝 Abstract
Understanding long, real-world videos requires modeling long-range visual dependencies. To this end, we explore video-first architectures, building on the common paradigm of transferring large-scale image–text models to video via shallow temporal fusion. However, we expose two limitations of this approach: (1) decreased spatial capabilities, likely due to poor video–language alignment in standard video datasets, and (2) higher memory consumption, which bottlenecks the number of frames that can be processed. To mitigate the memory bottleneck, we systematically analyze the memory/accuracy trade-off of various efficient methods: factorized attention, parameter-efficient image-to-video adaptation, input masking, and multi-resolution patchification. Surprisingly, simply masking large portions of the video (up to 75%) during contrastive pre-training proves to be one of the most robust ways to scale encoders to videos of up to 4.3 minutes at 1 FPS. Our simple approach for training long video-to-text models scales to 1B parameters, adds no architectural complexity, and outperforms the popular paradigm of using much larger LLMs to aggregate information over video segments on benchmarks with long-range temporal dependencies (YouCook2, EgoSchema).
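The input-masking idea in the abstract can be sketched as MAE-style random token dropping. This is an assumed minimal illustration (the function name, shapes, and keep/drop strategy are hypothetical, not the paper's implementation): flatten the video into patch tokens, keep a uniform random 25% subset, and feed only those to the encoder during contrastive pre-training.

```python
import numpy as np

def mask_video_tokens(tokens: np.ndarray, mask_ratio: float = 0.75, seed: int = 0):
    """Randomly drop a fraction of patch tokens (MAE-style sketch).

    tokens: (num_tokens, dim) array of flattened video patch embeddings.
    Returns the kept tokens and the indices of the kept positions.
    """
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))        # e.g. keep 25% at a 75% ratio
    keep_idx = rng.permutation(n)[:n_keep]    # uniform random subset
    return tokens[keep_idx], keep_idx

# Hypothetical example: 258 frames x 196 patches/frame, 768-dim tokens.
tokens = np.zeros((258 * 196, 768), dtype=np.float32)
kept, idx = mask_video_tokens(tokens, mask_ratio=0.75)
# The encoder now attends over only 25% of the tokens, so the
# quadratic attention memory drops roughly 16x.
```

Because the dropped positions are simply absent from the sequence, no architectural change is needed; the same backbone processes the shorter token list.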