Robust Classification with Noisy Labels Based on Posterior Maximization

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep learning models lack robustness to label noise, particularly symmetric label noise. Method: This paper studies the f-PML class of f-divergence-based objective functions for supervised classification and proposes two correction mechanisms: (i) a training-time correction of any f-PML objective that recovers the same neural network one would learn from the clean dataset, and (ii) a novel test-time refinement of the posterior estimated by a network trained on noisy labels. Contribution/Results: The paper proves that f-PML objectives, although not symmetric, are robust to symmetric label noise for any choice of f-divergence, without any correction; in particular, this establishes that cross-entropy, which belongs to the f-PML class, is robust to symmetric label noise. Combined with refined training strategies, the framework achieves competitive performance against state-of-the-art techniques for classification with label noise.

📝 Abstract
Designing objective functions robust to label noise is crucial for real-world classification algorithms. In this paper, we investigate the robustness to label noise of an $f$-divergence-based class of objective functions recently proposed for supervised classification, herein referred to as $f$-PML. We show that, in the presence of label noise, any of the $f$-PML objective functions can be corrected to obtain a neural network that is equal to the one learned with the clean dataset. Additionally, we propose an alternative and novel correction approach that, during the test phase, refines the posterior estimated by the neural network trained in the presence of label noise. Then, we demonstrate that, even if the considered $f$-PML objective functions are not symmetric, they are robust to symmetric label noise for any choice of $f$-divergence, without the need for any correction approach. This allows us to prove that the cross-entropy, which belongs to the $f$-PML class, is robust to symmetric label noise. Finally, we show that such a class of objective functions can be used together with refined training strategies, achieving competitive performance against state-of-the-art techniques of classification with label noise.
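The abstract's second correction approach, refining the estimated posterior at test time, can be illustrated in the simplest setting: symmetric label noise with a known noise rate. The sketch below is not the paper's $f$-PML formulation; it is a minimal, standard noise-transition inversion (the function names and the assumption of a known `noise_rate` are illustrative), showing how a posterior learned from noisy labels can be mapped back toward the clean posterior.

```python
import numpy as np

def symmetric_noise_matrix(num_classes: int, noise_rate: float) -> np.ndarray:
    """Transition matrix T for symmetric label noise: a label is kept with
    probability 1 - noise_rate and flipped uniformly to any other class
    otherwise, so T[i, j] = noise_rate / (K - 1) for i != j."""
    off_diag = noise_rate / (num_classes - 1)
    T = np.full((num_classes, num_classes), off_diag)
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def refine_posterior(noisy_posterior: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Test-time refinement: invert the noise transition to approximate the
    clean posterior, then clip and renormalize back onto the simplex."""
    clean = np.linalg.solve(T.T, noisy_posterior)
    clean = np.clip(clean, 0.0, None)
    return clean / clean.sum()

# Example: a posterior corrupted by 30% symmetric noise over 3 classes.
T = symmetric_noise_matrix(num_classes=3, noise_rate=0.3)
clean = np.array([0.7, 0.2, 0.1])
noisy = T.T @ clean               # what a network trained on noisy labels estimates
refined = refine_posterior(noisy, T)
```

Here `refined` recovers `clean` exactly because the transition matrix is known and invertible; in practice the noise rate must be estimated, which is where the paper's theoretical treatment matters.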
Problem

Research questions and friction points this paper is trying to address.

Correcting the impact of noisy labels on trained neural networks
Enhancing the robustness of classification under label noise
Improving posterior estimation in noisy-label settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Corrects f-PML objectives for label noise
Refines posterior estimates during testing
Ensures robustness to symmetric label noise