Pre-train to Gain: Robust Learning Without Clean Labels

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses deep models' susceptibility to overfitting and degraded generalization under label noise. The authors propose a robust training framework that requires no clean-label subset: self-supervised pre-training (e.g. SimCLR, Barlow Twins) initializes the feature extractor, decoupling representation learning from noise-sensitive supervised optimization and enhancing robustness without any clean-data prior. Extensive experiments on CIFAR-10/100 under synthetic (symmetric and asymmetric) and real-world (Clothing1M) label noise show that the approach consistently outperforms ImageNet-pretrained baselines, with gains growing as the noise rate rises; it also improves downstream label-error detection. The work establishes a paradigm for label-noise-robust learning under the stringent constraint of zero clean samples.
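The self-supervised objectives named above can be made concrete with the contrastive loss SimCLR optimizes. The sketch below is a minimal NumPy implementation of the NT-Xent (normalized temperature-scaled cross-entropy) loss; the function name, shapes, and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss, the contrastive objective used by SimCLR.

    z1, z2: (N, d) embeddings of two augmented views of the same N images.
    Each row's positive is the matching row in the other view; all other
    2N - 2 rows act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sim
    sim = z @ z.T / temperature                       # (2N, 2N) similarity matrix
    n = z1.shape[0]
    # index of the positive (the other augmented view) for each row
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # exclude self-similarity from the softmax denominator
    np.fill_diagonal(sim, -np.inf)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Pulling matched views together and pushing all other samples apart is what lets the backbone learn useful features from images alone, before any (possibly noisy) labels are seen.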

📝 Abstract
Training deep networks with noisy labels leads to poor generalization and degraded accuracy due to overfitting to label noise. Existing approaches for learning with noisy labels often rely on the availability of a clean subset of data. By pre-training a feature extractor backbone without labels using self-supervised learning (SSL), followed by standard supervised training on the noisy dataset, we can train a more noise-robust model without requiring a subset with clean labels. We evaluate the use of SimCLR and Barlow Twins as SSL methods on CIFAR-10 and CIFAR-100 under synthetic and real-world noise. Across all noise rates, self-supervised pre-training consistently improves classification accuracy and enhances downstream label-error detection (F1 and Balanced Accuracy). The performance gap widens as the noise rate increases, demonstrating improved robustness. Notably, our approach achieves comparable results to ImageNet pre-trained models at low noise levels, while substantially outperforming them under high noise conditions.
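The abstract evaluates under synthetic symmetric and asymmetric label noise. The paper does not spell out its injection protocol here, but the standard recipes are: symmetric noise flips a label to a uniformly random different class, while asymmetric noise flips it to a fixed confusable class (e.g. cat → dog). A minimal sketch under those standard definitions:

```python
import numpy as np

def add_symmetric_noise(labels, noise_rate, num_classes, rng):
    """Flip each label to a uniformly chosen *different* class
    with probability `noise_rate` (symmetric label noise)."""
    labels = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_rate)[0]:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

def add_asymmetric_noise(labels, noise_rate, pair_map, rng):
    """Flip each label to a fixed confusable class given by `pair_map`
    with probability `noise_rate` (asymmetric, class-conditional noise)."""
    labels = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_rate)[0]:
        labels[i] = pair_map.get(int(labels[i]), labels[i])
    return labels
```

Asymmetric noise is the harder setting because errors are structured: the noisy class distribution stays plausible, so a model cannot detect corruption simply from class-frequency anomalies.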
Problem

Research questions and friction points this paper is trying to address.

Training deep networks with noisy labels causes poor generalization and accuracy degradation
Existing methods require clean data subsets for robust learning with noisy labels
Can self-supervised pre-training remove the need for clean labeled data in noise-robust training?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised pre-training enhances noise robustness
SSL methods such as SimCLR and Barlow Twins replace the clean-label-subset requirement
Pre-training improves accuracy and error detection performance