Robust Self-Training with Closed-loop Label Correction for Learning from Noisy Labels

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a self-training label correction framework based on decoupled bilevel optimization to address the performance degradation of deep neural networks under label noise. The method enables co-evolution of a classifier and a neural correction function, leveraging a small clean validation set, simulated noisy posterior estimation, and intermediate feature knowledge transfer to establish a closed-loop feedback mechanism that effectively mitigates error amplification. By design, the approach maintains theoretical stability while significantly improving the utilization efficiency of noisy samples and reducing computational overhead. Extensive experiments demonstrate state-of-the-art performance and training efficiency on benchmark datasets including CIFAR and Clothing1M.

📝 Abstract
Training deep neural networks with noisy labels remains a significant challenge, often leading to degraded performance. Existing methods for handling label noise typically rely on transition matrices, noise detection, or meta-learning techniques, but they often exhibit low utilization efficiency of noisy samples and incur high computational costs. In this paper, we propose a self-training label correction framework using decoupled bilevel optimization, where a classifier and a neural correction function co-evolve. Leveraging a small clean dataset, our method employs noisy posterior simulation and intermediate features to transfer ground-truth knowledge, forming a closed-loop feedback system that prevents error amplification. Theoretical guarantees underpin the stability of our approach, and extensive experiments on benchmark datasets such as CIFAR and Clothing1M confirm state-of-the-art performance with reduced training time, highlighting its practical applicability for learning from noisy labels.
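To make the bilevel structure concrete, the loop below is a minimal toy sketch of the idea described in the abstract: an inner step trains a classifier on labels softened by a correction function, and an outer step adjusts the correction parameter to reduce loss on a small clean validation set. Everything here is an illustrative assumption, not the paper's method: the "correction function" is reduced to a single learned mixing weight between the noisy label and the classifier's posterior, the classifier is logistic regression, and the decoupled hypergradient is approximated by a one-step-lookahead finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2-D points, true labels from a linear rule, 30% of labels flipped.
X = rng.normal(size=(200, 2))
y_true = (X @ np.array([1.5, -1.0]) > 0).astype(float)
flip = rng.random(200) < 0.3
y_noisy = np.where(flip, 1.0 - y_true, y_true)

# Small clean validation set, as the framework assumes is available.
Xv = rng.normal(size=(40, 2))
yv = (Xv @ np.array([1.5, -1.0]) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrected_labels(w, a):
    """Stand-in correction function: mix the noisy label with the
    classifier posterior; the mixing weight sigmoid(a) is learned."""
    lam = sigmoid(a)
    return (1.0 - lam) * y_noisy + lam * sigmoid(X @ w)

def inner_step(w, a, lr=0.1):
    """Inner level: one gradient step of the classifier on corrected
    (soft) labels."""
    q = corrected_labels(w, a)
    return w - lr * X.T @ (sigmoid(X @ w) - q) / len(X)

def val_loss(w):
    p = np.clip(sigmoid(Xv @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yv * np.log(p) + (1.0 - yv) * np.log(1.0 - p))

w, alpha = np.zeros(2), 0.0
for step in range(300):
    w = inner_step(w, alpha)  # inner: train classifier on corrected labels
    # Outer: move the correction parameter so that a one-step lookahead
    # of the classifier reduces clean-validation loss (finite-difference
    # stand-in for the decoupled hypergradient).
    h = 1e-3
    g = (val_loss(inner_step(w, alpha + h)) - val_loss(inner_step(w, alpha))) / h
    alpha -= 0.5 * g
```

Alternating the two updates, rather than differentiating through the full inner training trajectory, is what keeps the outer step cheap; the paper's "decoupled" formulation plays an analogous role, with a neural correction function and feature-level knowledge transfer in place of this scalar mixing weight.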
Problem

Research questions and friction points this paper is trying to address.

noisy labels
deep neural networks
label noise
self-training
robust learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-training
label correction
bilevel optimization
noisy labels
closed-loop feedback