Object-level Self-Distillation for Vision Pretraining

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image-level self-distillation methods implicitly assume a “single-object” setting, limiting their applicability to multi-object ImageNet images and hindering generalization to scene-centric real-world data. To address this, we propose Object-level Self-DIStillation (ODIS), the first self-distillation framework operating at the object level. ODIS localizes semantically meaningful regions via object-aware cropping and employs mask-guided attention to focus on target objects, enabling object-level knowledge transfer within Vision Transformer (ViT) architectures. By decoupling distillation from the single-object constraint, ODIS supports pretraining on complex, multi-object scenes while jointly enhancing both image-level and patch-level representation learning. On ImageNet-1K, our ViT-Large variant achieves 82.6% k-NN classification accuracy, significantly outperforming image-level self-distillation baselines.
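The mask-guided attention described above can be sketched with a standard attention-masking trick: patch tokens outside the object mask are excluded from attention by setting their logits to negative infinity before the softmax. This is a minimal NumPy sketch under that assumption; the paper's exact formulation (head count, where the mask is applied, CLS-token handling) may differ.

```python
import numpy as np

def mask_guided_attention(q, k, v, obj_mask):
    """Single-head attention in which keys outside the object mask are
    excluded, so each query attends only to tokens on the target object.
    Hypothetical sketch; not the paper's exact implementation.

    q, k, v:   (num_tokens, dim) query/key/value matrices
    obj_mask:  (num_tokens,) boolean, True for tokens on the target object
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                  # (num_tokens, num_tokens)
    logits[:, ~obj_mask] = -np.inf                 # block attention to background tokens
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability; exp(-inf) -> 0
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                             # (num_tokens, dim)
```

With this masking, background patches receive exactly zero attention weight, so the pooled object representation cannot leak information from other objects or clutter in the scene.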

📝 Abstract
State-of-the-art vision pretraining methods rely on image-level self-distillation from object-centric datasets such as ImageNet, implicitly assuming each image contains a single object. This assumption does not always hold: many ImageNet images already contain multiple objects. Further, it limits scalability to scene-centric datasets that better mirror real-world complexity. We address these challenges by introducing Object-level Self-DIStillation (ODIS), a pretraining approach that shifts the self-distillation granularity from whole images to individual objects. Using object-aware cropping and masked attention, ODIS isolates object-specific regions, guiding the transformer toward semantically meaningful content and transforming a noisy, scene-level task into simpler object-level sub-tasks. We show that this approach improves visual representations both at the image and patch levels. Using masks at inference time, our method achieves an impressive 82.6% k-NN accuracy on ImageNet-1k with ViT-Large.
Problem

Research questions and friction points this paper is trying to address.

Image-level self-distillation implicitly assumes one object per image
Many ImageNet images actually contain multiple objects
Limited scalability to scene-centric, real-world datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-level self-distillation for pretraining
Object-aware cropping and masked attention
Transforms scene-level to object-level tasks
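Of the contributions above, object-aware cropping is the most mechanical: given a binary object mask, the image is cropped to the object's bounding box before being fed to the student/teacher networks. A minimal sketch, assuming the object is localized by a precomputed mask and cropped with a small context pad (the padding parameter and localization source are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def object_aware_crop(image, mask, pad=8):
    """Crop an image to the bounding box of a binary object mask, with a
    small context padding. Hypothetical sketch of object-aware cropping;
    the paper may obtain and use object regions differently.

    image: (H, W, C) array
    mask:  (H, W) boolean array, True on the object
    """
    ys, xs = np.where(mask)                # pixel coordinates of the object
    y0 = max(ys.min() - pad, 0)            # clamp padded box to image bounds
    y1 = min(ys.max() + 1 + pad, image.shape[0])
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + 1 + pad, image.shape[1])
    return image[y0:y1, x0:x1]
```

Each such crop turns one multi-object scene into several single-object views, which is what lets a standard image-level self-distillation objective be reused per object.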