Cluster and Predict Latent Patches for Improved Masked Image Modeling

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Masked Image Modeling (MIM) methods remain substantially inferior to state-of-the-art contrastive learning paradigms for self-supervised representation learning. To address this gap, the authors propose CAPI, a novel MIM framework that abandons conventional pixel- or token-level reconstruction objectives. Instead, CAPI formulates a latent-space cluster-prediction task: it applies K-means clustering to the patch features produced by a ViT-L backbone and supervises the prediction of the resulting cluster labels for masked patches with a classification loss. This clustering-based loss is stable to train and scales well, improving both generalization and training robustness. Under linear probing, CAPI reaches 83.8% top-1 accuracy on ImageNet and 32.1% mIoU on ADE20K semantic segmentation, substantially outperforming mainstream MIM approaches (e.g., MAE, SimMIM) and approaching the performance of DINOv2. CAPI thus sets a new benchmark for the MIM paradigm.
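The recipe in the summary — cluster patch features, then train the model to classify each masked patch into its cluster — can be sketched in a few lines. This is a toy numpy illustration, not the released CAPI code: the function names and the deterministic farthest-point initialization are my own choices, and hard K-means labels stand in for the paper's actual clustering objective.

```python
import numpy as np

def kmeans_assign(feats, k, iters=10):
    """Toy K-means over patch features; returns hard cluster labels.

    Farthest-point initialization keeps this small example deterministic.
    """
    centroids = feats[[0]].astype(float)
    for _ in range(k - 1):
        d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).min(1)
        centroids = np.vstack([centroids, feats[d.argmax()]])
    for _ in range(iters):
        assign = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = feats[assign == j].mean(0)
    return assign

def cluster_prediction_loss(student_logits, assignments):
    """Cross-entropy between predicted cluster distribution and hard labels."""
    z = student_logits - student_logits.max(1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(1, keepdims=True))
    return -log_probs[np.arange(len(assignments)), assignments].mean()
```

The classification loss drives the predictor toward the cluster label of each masked patch; with uniform logits it sits at log(k) and decreases as predictions sharpen onto the correct clusters.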

📝 Abstract
Masked Image Modeling (MIM) offers a promising approach to self-supervised representation learning; however, existing MIM models still lag behind the state-of-the-art. In this paper, we systematically analyze target representations, loss functions, and architectures, to introduce CAPI, a novel pure-MIM framework that relies on the prediction of latent clusterings. Our approach leverages a clustering-based loss, which is stable to train, and exhibits promising scaling properties. Our ViT-L backbone, CAPI, achieves 83.8% accuracy on ImageNet and 32.1% mIoU on ADE20K with simple linear probes, substantially outperforming previous MIM methods and approaching the performance of the current state-of-the-art, DINOv2. We release all our code and models.
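The "simple linear probes" mentioned in the abstract mean training only a linear classifier on top of frozen backbone features. A minimal numpy sketch of that protocol (the helper names and softmax-regression training loop are illustrative; real evaluations typically use a standard library classifier):

```python
import numpy as np

def linear_probe(feats, labels, num_classes, lr=0.5, steps=200):
    """Fit a linear softmax classifier on frozen features (toy linear probe)."""
    n, d = feats.shape
    W = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = feats @ W + b
        logits -= logits.max(1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(1, keepdims=True)
        grad = (probs - onehot) / n          # softmax cross-entropy gradient
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(0)
    return W, b

def probe_accuracy(W, b, feats, labels):
    """Top-1 accuracy of the linear probe on the given features."""
    return ((feats @ W + b).argmax(1) == labels).mean()
```

The backbone is never updated, so probe accuracy directly measures how linearly separable the frozen representations are — the metric behind the 83.8% ImageNet figure.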
Problem

Research questions and friction points this paper addresses.

Improving Masked Image Modeling
Clustering-based loss functions
Self-supervised representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clustering-based loss function
Latent cluster prediction
ViT-L backbone enhancement