🤖 AI Summary
To address the challenge of balancing generalization and task adaptability in existing video encoders, this paper introduces VideoPrism, a universal frozen video encoder that handles diverse video understanding tasks without fine-tuning. Methodologically, VideoPrism combines global-local distillation of semantic video embeddings with a token shuffling scheme, so the model focuses primarily on the video modality while still exploiting the auxiliary text associated with videos (e.g., ASR transcripts). Its pretraining integrates masked autoencoding on heterogeneous data, cross-modal embedding distillation, and video-text contrastive learning. Evaluated on 33 mainstream benchmarks spanning four broad groups of tasks, including web video question answering and CV for science, VideoPrism achieves state-of-the-art performance on 31 of them, demonstrating strong zero-shot and few-shot video understanding as well as cross-task generalization.
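For intuition, here is a minimal sketch of the video-text contrastive objective mentioned above, written in PyTorch. The function name, the temperature value, and the symmetric InfoNCE formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    video_emb, text_emb: (batch, dim) tensors from the video encoder and a
    text encoder; row i of each tensor forms a positive pair, and all other
    rows in the batch serve as negatives.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)  # diagonal = positives
    loss_v2t = F.cross_entropy(logits, targets)    # video -> text direction
    loss_t2v = F.cross_entropy(logits.T, targets)  # text -> video direction
    return 0.5 * (loss_v2t + loss_t2v)
```

In a setup like the one the summary describes, an encoder trained this way can also serve as the teacher whose semantic embeddings are distilled in the masked-modeling stage.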
📝 Abstract
We introduce VideoPrism, a general-purpose video encoder that tackles diverse video understanding tasks with a single frozen model. We pretrain VideoPrism on a heterogeneous corpus containing 36M high-quality video-caption pairs and 582M video clips with noisy parallel text (e.g., ASR transcripts). The pretraining approach improves upon masked autoencoding by global-local distillation of semantic video embeddings and a token shuffling scheme, enabling VideoPrism to focus primarily on the video modality while leveraging the invaluable text associated with videos. We extensively test VideoPrism on four broad groups of video understanding tasks, from web video question answering to CV for science, achieving state-of-the-art performance on 31 out of 33 video understanding benchmarks.
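To make the masked-modeling side of the abstract concrete, below is a hedged sketch of how token shuffling and global-local distillation could fit together. All names (`shuffle_visible_tokens`, `global_local_distillation_loss`), the MSE matching losses, the mean pooling, and the loss weights are hypothetical placeholders, since the paper's code is not reproduced here.

```python
import torch
import torch.nn.functional as F

def shuffle_visible_tokens(tokens):
    """Randomly permute visible (unmasked) tokens before decoding.

    tokens: (batch, num_visible, dim). Shuffling removes positional
    shortcuts, so the decoder cannot reconstruct the teacher's embeddings
    by simply copying tokens at aligned positions.
    """
    perm = torch.randperm(tokens.size(1), device=tokens.device)
    return tokens[:, perm, :], perm

def global_local_distillation_loss(student_tokens, teacher_tokens,
                                   local_weight=1.0, global_weight=1.0):
    """Hypothetical combined distillation objective.

    Matches the teacher's per-token (local) embeddings and a pooled
    video-level (global) embedding; mean pooling and the weights are
    assumptions, not values from the paper.
    """
    local_loss = F.mse_loss(student_tokens, teacher_tokens)
    global_loss = F.mse_loss(student_tokens.mean(dim=1),
                             teacher_tokens.mean(dim=1))
    return local_weight * local_loss + global_weight * global_loss
```

The intent of combining the two terms is that the local loss preserves fine-grained, token-level semantics while the global loss keeps the pooled video representation aligned with the teacher's.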