Perception Encoder: The best visual embeddings are not at the output of the network

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of vision encoders that rely heavily on final-layer representations and struggle to serve both generative and perceptual downstream tasks, this paper introduces the Perception Encoder (PE), a visual encoder trained exclusively via contrastive vision-language pretraining. Its key finding is that the strongest general-purpose embeddings lie in the network's intermediate layers rather than at its output. To surface them, PE pairs two alignment mechanisms: language alignment (enabling image-text matching and multimodal question answering) and spatial alignment (supporting dense prediction tasks such as object detection, depth estimation, and single-object tracking). The method combines synthetic and human-annotated video data, mid-layer feature extraction, and end-to-end contrastive learning, and achieves state-of-the-art results across zero-shot image/video classification and retrieval; document, image, and video question answering; object detection; depth estimation; and single-object tracking. The model, training code, and a novel video dataset are publicly released.
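The summary's central idea, that the most useful embedding sits in an intermediate layer rather than at the network's output, can be illustrated with a minimal sketch. Everything here is hypothetical: the toy "layers" are stand-ins for transformer blocks, and `run_encoder` is not the paper's API, just a way to show keeping all activations so a downstream head can select a mid-layer feature.

```python
# Hypothetical sketch: keep every intermediate activation so a
# downstream task can read out a mid-layer embedding instead of
# only the final output. The layers below are toy stand-ins.

def run_encoder(layers, x):
    """Apply each layer in turn, returning all intermediate activations."""
    activations = [x]
    for layer in layers:
        x = layer(x)
        activations.append(x)
    return activations

# Toy layers: each just scales its input elementwise.
layers = [lambda v, s=s: [s * e for e in v] for s in (2, 3, 5)]

acts = run_encoder(layers, [1.0, 1.0])
mid_embedding = acts[2]   # choose a mid-layer feature, not acts[-1]
```

In a real framework the same effect is usually achieved with forward hooks or a feature-extraction wrapper; the point is only that the readout layer becomes a choice, not a fixed endpoint.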

📝 Abstract
We introduce Perception Encoder (PE), a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. Traditionally, vision encoders have relied on a variety of pretraining objectives, each tailored to specific downstream tasks such as classification, captioning, or localization. Surprisingly, after scaling our carefully tuned image pretraining recipe and refining with our robust video data engine, we find that contrastive vision-language training alone can produce strong, general embeddings for all of these downstream tasks. There is only one caveat: these embeddings are hidden within the intermediate layers of the network. To draw them out, we introduce two alignment methods, language alignment for multimodal language modeling, and spatial alignment for dense prediction. Together with the core contrastive checkpoint, our PE family of models achieves state-of-the-art performance on a wide variety of tasks, including zero-shot image and video classification and retrieval; document, image, and video Q&A; and spatial tasks such as detection, depth estimation, and tracking. To foster further research, we are releasing our models, code, and a novel dataset of synthetically and human-annotated videos.
Problem

Research questions and friction points this paper is trying to address.

Locating the most general visual embeddings within intermediate network layers
Producing strong image and video representations from contrastive training alone
Aligning a single encoder with diverse multimodal and spatial downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive vision-language pretraining as a single, general objective
Language and spatial alignment to extract intermediate-layer embeddings
State-of-the-art performance across multimodal and dense prediction tasks
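The contrastive objective named above is, per the abstract, a standard symmetric image-text loss. As a hedged illustration (not the paper's implementation; batch construction, scaling, and the temperature value are assumptions), a CLIP-style symmetric InfoNCE loss can be sketched in plain Python:

```python
import math

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image/text embeddings.

    Row i of img_emb and row i of txt_emb are assumed to be a
    positive pair; all other pairings in the batch are negatives.
    """
    def normalize(v):
        n = math.sqrt(sum(e * e for e in v))
        return [e / n for e in v]

    img = [normalize(v) for v in img_emb]
    txt = [normalize(v) for v in txt_emb]
    # Pairwise cosine similarities, scaled by temperature.
    logits = [[sum(a * b for a, b in zip(i, t)) / temperature for t in txt]
              for i in img]

    def cross_entropy(rows):
        # Mean of -log softmax at the diagonal (matching pair) per row.
        total = 0.0
        for k, row in enumerate(rows):
            m = max(row)
            log_z = m + math.log(sum(math.exp(r - m) for r in row))
            total += log_z - row[k]
        return total / len(rows)

    cols = [list(c) for c in zip(*logits)]  # text-to-image direction
    return 0.5 * (cross_entropy(logits) + cross_entropy(cols))
```

With perfectly matched pairs the loss approaches zero; swapping the captions within a batch drives it up, which is the gradient signal the pretraining relies on.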