🤖 AI Summary
This work addresses two key limitations in current self-supervised visual pre-training methods: the loss of fine-grained information in contrastive learning and attention drift in masked image modeling caused by random masking. To overcome these issues, the authors propose C2FMAE, a novel framework that leverages a three-level granularity hierarchy—scene semantics, object instances, and pixels—to construct a cascaded decoder that explicitly models cross-granularity dependencies. A progressive masking curriculum is introduced to establish a structured learning path from global semantics to local details. By integrating multi-granularity pseudo-labels within a masked autoencoding architecture, C2FMAE achieves consistent and significant performance gains across image classification, object detection, and semantic segmentation tasks, demonstrating the effectiveness and generalizability of hierarchical representation learning.
📝 Abstract
Self-supervised visual pre-training methods face an inherent tension: contrastive learning (CL) captures global semantics but loses fine-grained detail, while masked image modeling (MIM) preserves local textures but suffers from "attention drift" due to semantically agnostic random masking. We propose C2FMAE, a coarse-to-fine masked autoencoder that resolves this tension by explicitly learning hierarchical visual representations across three data granularities: semantic masks (scene-level), instance masks (object-level), and RGB images (pixel-level). Two synergistic innovations enforce a strict top-down learning principle. First, a cascaded decoder sequentially reconstructs from scene semantics to object instances to pixel details, establishing explicit cross-granularity dependencies that parallel decoders cannot capture. Second, a progressive masking curriculum dynamically shifts the training focus from semantic-guided to instance-guided and finally to random masking, creating a structured learning path from global context to local features. To support this framework, we construct a large-scale multi-granular dataset with high-quality pseudo-labels for all 1.28M ImageNet-1K images. Extensive experiments show that C2FMAE achieves significant performance gains on image classification, object detection, and semantic segmentation, validating the effectiveness of our hierarchical design in learning more robust and generalizable representations.
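To make the progressive masking curriculum concrete, the sketch below shows one plausible way to shift sampling weight from semantic-guided to instance-guided to random masking as training advances. The piecewise-linear schedule, the phase boundaries, and the function name `masking_weights` are illustrative assumptions, not the authors' published schedule.

```python
def masking_weights(progress: float) -> dict[str, float]:
    """Hypothetical coarse-to-fine masking schedule.

    progress: fraction of training completed, in [0, 1].
    Returns normalized sampling weights over the three masking
    strategies named in the abstract. The exact interpolation is
    an assumption for illustration.
    """
    assert 0.0 <= progress <= 1.0
    if progress < 1 / 3:
        # Early phase: emphasize scene-level (semantic-guided) masking.
        t = progress * 3
        w = {"semantic": 1.0 - 0.5 * t, "instance": 0.5 * t, "random": 0.0}
    elif progress < 2 / 3:
        # Middle phase: emphasize object-level (instance-guided) masking.
        t = (progress - 1 / 3) * 3
        w = {"semantic": 0.5 * (1 - t), "instance": 0.5, "random": 0.5 * t}
    else:
        # Late phase: hand off to random (pixel-level) masking.
        t = (progress - 2 / 3) * 3
        w = {"semantic": 0.0, "instance": 0.5 * (1 - t), "random": 0.5 + 0.5 * t}
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}
```

A training loop would call `masking_weights(step / total_steps)` each iteration and sample one masking strategy per batch from the returned distribution; the schedule is continuous at the phase boundaries, so the transition from global context to local detail is gradual rather than abrupt.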