From Semantics to Pixels: Coarse-to-Fine Masked Autoencoders for Hierarchical Visual Understanding

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key limitations in current self-supervised visual pre-training methods: the loss of fine-grained information in contrastive learning and attention drift in masked image modeling caused by random masking. To overcome these issues, the authors propose C2FMAE, a novel framework that leverages a three-level granularity hierarchy—scene semantics, object instances, and pixels—to construct a cascaded decoder that explicitly models cross-granularity dependencies. A progressive masking curriculum is introduced to establish a structured learning path from global semantics to local details. By integrating multi-granularity pseudo-labels within a masked autoencoding architecture, C2FMAE achieves consistent and significant performance gains across image classification, object detection, and semantic segmentation tasks, demonstrating the effectiveness and generalizability of hierarchical representation learning.
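The cascaded decoding order described above can be sketched in a few lines. This is a toy illustration of the dependency chain only: each stage reconstructs one granularity and feeds its output forward, so pixel reconstruction is conditioned on the coarser levels, unlike parallel decoders. The function names and the stand-in decoders are illustrative assumptions, not the paper's actual architecture.

```python
# Toy sketch of a cascaded (coarse-to-fine) decoder. Each stage consumes
# the shared latent plus the outputs of all coarser stages, making the
# cross-granularity dependencies explicit. All names are placeholders.

def cascaded_decode(latent, decode_semantic, decode_instance, decode_pixel):
    """Decode coarse-to-fine: scene semantics -> object instances -> pixels."""
    sem = decode_semantic(latent)           # scene-level semantic mask
    inst = decode_instance(latent, sem)     # object-level, conditioned on sem
    rgb = decode_pixel(latent, sem, inst)   # pixel-level, conditioned on both
    return sem, inst, rgb

# Tiny numeric stand-ins just to show the dependency chain:
sem, inst, rgb = cascaded_decode(
    latent=1.0,
    decode_semantic=lambda z: z * 2,
    decode_instance=lambda z, s: z + s,
    decode_pixel=lambda z, s, i: z + s + i,
)
print(sem, inst, rgb)  # 2.0 3.0 6.0
```

A parallel design would call all three decoders on `latent` alone; the cascade instead threads each coarser output into the next stage, which is the dependency the summary says parallel decoders cannot capture.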

📝 Abstract
Self-supervised visual pre-training methods face an inherent tension: contrastive learning (CL) captures global semantics but loses fine-grained detail, while masked image modeling (MIM) preserves local textures but suffers from "attention drift" due to semantically-agnostic random masking. We propose C2FMAE, a coarse-to-fine masked autoencoder that resolves this tension by explicitly learning hierarchical visual representations across three data granularities: semantic masks (scene-level), instance masks (object-level), and RGB images (pixel-level). Two synergistic innovations enforce a strict top-down learning principle. First, a cascaded decoder sequentially reconstructs from scene semantics to object instances to pixel details, establishing explicit cross-granularity dependencies that parallel decoders cannot capture. Second, a progressive masking curriculum dynamically shifts the training focus from semantic-guided to instance-guided and finally to random masking, creating a structured learning path from global context to local features. To support this framework, we construct a large-scale multi-granular dataset with high-quality pseudo-labels for all 1.28M ImageNet-1K images. Extensive experiments show that C2FMAE achieves significant performance gains on image classification, object detection, and semantic segmentation, validating the effectiveness of our hierarchical design in learning more robust and generalizable representations.
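The progressive masking curriculum in the abstract shifts training emphasis from semantic-guided to instance-guided and finally to random masking. A minimal sketch of such a schedule, assuming a piecewise-linear interpolation of mixture weights over the three strategies (the function name and the exact schedule shape are assumptions for illustration, not the paper's formulation):

```python
# Hypothetical progressive masking schedule: mixture weights over the
# three masking strategies as a piecewise-linear function of training
# progress. The 0.5 switch point is an illustrative choice.

def masking_weights(step: int, total_steps: int) -> dict:
    """Return mixture weights for the three masking strategies at `step`."""
    t = step / max(total_steps, 1)  # training progress in [0, 1]
    if t < 0.5:
        # First half: shift emphasis from semantic- to instance-guided masking.
        u = t / 0.5
        return {"semantic": 1.0 - u, "instance": u, "random": 0.0}
    # Second half: shift emphasis from instance-guided to random masking.
    u = (t - 0.5) / 0.5
    return {"semantic": 0.0, "instance": 1.0 - u, "random": u}

print(masking_weights(0, 100))    # start: all semantic-guided
print(masking_weights(50, 100))   # midpoint: all instance-guided
print(masking_weights(100, 100))  # end: plain random masking
```

Each training step would then sample a masking strategy according to these weights, giving the structured global-to-local learning path the abstract describes.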
Problem

Research questions and friction points this paper is trying to address.

self-supervised learning
masked image modeling
contrastive learning
hierarchical visual representation
attention drift
Innovation

Methods, ideas, or system contributions that make the work stand out.

masked autoencoder
hierarchical representation
coarse-to-fine learning
progressive masking
self-supervised pre-training
Wenzhao Xiang
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Pengcheng Laboratory, Shenzhen 518108, China
Yue Wu
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Pengcheng Laboratory, Shenzhen 518108, China
Hongyang Yu
Peng Cheng Laboratory
Feng Gao
School of Arts, Peking University, Beijing 100871, China
Fan Yang
Peking University
Deep Learning, Computer Vision
Xilin Chen
Institute of Computing Technology, Chinese Academy of Sciences
Computer Vision, Pattern Recognition, Machine Learning