Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Self-supervised learning (SSL) exhibits poor robustness to noisy data, limiting its applicability in real-world domains such as astrophysics and medical imaging. To address this, we propose a noise-robust SSL framework that eliminates the need for a denoiser at inference time. Our method first pretrains an SSL-based denoiser to construct a progressive "denoised-to-noisy" data curriculum; subsequently, teacher–student embedding alignment regularization is employed so that the backbone intrinsically learns noise-invariant representations. This is the first work to achieve denoiser-free robust SSL, decoupling downstream tasks from auxiliary denoising modules. Evaluated on ImageNet-1K under extreme Gaussian noise (σ = 255, SNR = 0.72 dB), our approach improves linear probing accuracy by 4.8% over a DINOv2 ViT-B baseline, demonstrating the feasibility of learning noise-robust representations without inference-time denoising.

📝 Abstract
Self-Supervised Learning (SSL) has become a powerful solution to extract rich representations from unlabeled data. Yet, SSL research is mostly focused on clean, curated, high-quality datasets. As a result, applying SSL on noisy data remains a challenge, despite being crucial to applications such as astrophysics, medical imaging, geophysics, or finance. In this work, we present a fully self-supervised framework that enables noise-robust representation learning without requiring a denoiser at inference or downstream fine-tuning. Our method first trains an SSL denoiser on noisy data, then uses it to construct a denoised-to-noisy data curriculum (i.e., training first on denoised, then noisy samples) for pretraining an SSL backbone (e.g., DINOv2), combined with a teacher-guided regularization that anchors noisy embeddings to their denoised counterparts. This process encourages the model to internalize noise robustness. Notably, the denoiser can be discarded after pretraining, simplifying deployment. On ImageNet-1k with ViT-B under extreme Gaussian noise (σ = 255, SNR = 0.72 dB), our method improves linear probing accuracy by 4.8% over DINOv2, demonstrating that denoiser-free robustness can emerge from noise-aware pretraining. The code is available at https://github.com/wenquanlu/noisy_dinov2.
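The teacher-guided regularization described above anchors each noisy sample's embedding to the embedding of its denoised counterpart. The abstract does not give the exact loss; the sketch below uses a simple 1 − cosine-similarity penalty as one plausible formulation (the function name `alignment_loss` and the specific penalty are assumptions, not the paper's definition):

```python
import numpy as np

def alignment_loss(noisy_emb, denoised_emb):
    """Penalize divergence between embeddings of noisy samples and
    their denoised counterparts (hypothetical 1 - cosine formulation;
    the paper's exact regularizer may differ)."""
    n = noisy_emb / np.linalg.norm(noisy_emb, axis=-1, keepdims=True)
    d = denoised_emb / np.linalg.norm(denoised_emb, axis=-1, keepdims=True)
    # Mean over the batch of (1 - cosine similarity) per embedding pair.
    return float(np.mean(1.0 - np.sum(n * d, axis=-1)))

# Identical embeddings incur zero penalty; mismatched ones are penalized.
rng = np.random.default_rng(0)
e = rng.normal(size=(4, 8))
print(alignment_loss(e, e))  # ~0.0
```

In training, the denoised-side embedding would come from the (frozen or EMA) teacher, so gradients only push the student's noisy-view embeddings toward the anchor.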
Problem

Research questions and friction points this paper is trying to address.

Enables noise-robust SSL without inference denoiser
Trains SSL models on noisy data via curriculum
Improves accuracy on noisy datasets without fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised denoiser for noise-robust learning
Denoised-to-noisy data curriculum pretraining
Teacher-guided regularization for embedding stability
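The curriculum idea in the points above — pretrain on denoised samples first, then phase in the original noisy ones — can be sketched as a sampling schedule. The linear ramp and the `switch` parameter below are illustrative assumptions; the paper's actual schedule may be a hard switch or another shape:

```python
def noisy_fraction(epoch, total_epochs, switch=0.5):
    """Fraction of noisy (vs. denoised) samples drawn at a given epoch.

    Hypothetical schedule: purely denoised data for the first half of
    training, then a linear ramp to fully noisy data by the final epoch.
    """
    if total_epochs <= 1:
        return 1.0
    t = epoch / (total_epochs - 1)  # normalized training progress in [0, 1]
    if t < switch:
        return 0.0                  # denoised-only phase
    return min(1.0, (t - switch) / (1.0 - switch))  # ramp up noisy share
```

A data loader would then, at each epoch, draw `noisy_fraction(epoch, total_epochs)` of its batch from the raw noisy set and the remainder from the denoiser's outputs.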