IRNet: Iterative Refinement Network for Noisy Partial Label Learning

📅 2022-11-09
📈 Citations: 4
Influential: 2
🤖 AI Summary
This work studies noisy partial-label learning (noisy PLL), a more realistic weakly supervised setting where candidate label sets may exclude the ground-truth label, relaxing the strong assumption in conventional PLL that the true label must reside in the candidate set. To address this challenge, the authors propose IRNet, an iterative refinement network featuring two synergistic modules, noisy sample detection and dynamic label correction, which jointly purify the training data over iterations. They formally define the noisy PLL problem for the first time and theoretically prove that IRNet's iterative framework converges to the Bayes-optimal classifier. To enhance robustness, they incorporate warm-start training, consistency regularization, and data augmentation. Extensive experiments on multiple benchmark datasets demonstrate that IRNet significantly outperforms existing state-of-the-art methods. Both theoretical analysis and empirical results validate its effectiveness in denoising and generalization.
📝 Abstract
Partial label learning (PLL) is a typical weakly supervised learning problem, where each sample is associated with a set of candidate labels. The basic assumption of PLL is that the ground-truth label must reside in the candidate set. However, this assumption may not be satisfied due to the unprofessional judgment of the annotators, thus limiting the practical application of PLL. In this paper, we relax this assumption and focus on a more general problem, noisy PLL, where the ground-truth label may not exist in the candidate set. To address this challenging problem, we propose a novel framework called "Iterative Refinement Network (IRNet)". It aims to purify the noisy samples via two key modules, i.e., noisy sample detection and label correction. Ideally, we can convert noisy PLL into traditional PLL if all noisy samples are corrected. To guarantee the performance of these modules, we start with warm-up training and exploit data augmentation to reduce prediction errors. Through theoretical analysis, we prove that IRNet is able to reduce the noise level of the dataset and eventually approximate the Bayes optimal classifier. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our method. IRNet is superior to existing state-of-the-art approaches on noisy PLL.
Problem

Research questions and friction points this paper is trying to address.

Addresses noisy partial label learning with missing ground-truth labels
Purifies noisy samples via detection and correction modules
Improves performance using smoothness constraints and a plug-in strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative refinement network purifies noisy partial labels
Detects noisy samples and corrects labels iteratively
Uses smoothness constraints to reduce prediction errors
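The detect-and-correct iteration above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it assumes one refinement pass in which a sample is flagged as noisy when the model's most confident prediction falls outside its candidate set with confidence above a threshold, and correction adds that predicted label to the candidate set. The function name `refine_candidates` and the threshold `detect_thresh` are illustrative choices.

```python
import numpy as np

def refine_candidates(probs, candidates, detect_thresh=0.9):
    """One simplified IRNet-style refinement pass (illustrative sketch).

    probs:      (N, C) array of model softmax outputs.
    candidates: (N, C) binary candidate-label mask.

    Detection: flag a sample as noisy if the model's top prediction
    lies outside its candidate set with confidence >= detect_thresh.
    Correction: add that predicted label to the candidate set, so the
    (assumed) ground truth can re-enter the set over iterations.
    """
    refined = candidates.copy()
    top = probs.argmax(axis=1)            # most confident class per sample
    conf = probs.max(axis=1)              # its confidence
    outside = candidates[np.arange(len(top)), top] == 0
    noisy = outside & (conf >= detect_thresh)
    refined[noisy, top[noisy]] = 1        # label correction step
    return refined, noisy

# Usage: sample 0's top prediction (class 0) is outside its candidate
# set with high confidence, so it is flagged and corrected; sample 1
# is left unchanged because its top prediction is already a candidate.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.20, 0.50, 0.30]])
candidates = np.array([[0, 1, 1],
                       [1, 1, 0]])
refined, noisy = refine_candidates(probs, candidates)
```

In the full method this pass would repeat across training epochs after a warm-up phase, with data augmentation stabilizing the predictions used for detection.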
Zheng Lian
Associate Professor, IEEE/CCF Senior Member, Institute of Automation, Chinese Academy of Sciences
Affective Computing, Sentiment Analysis, Machine Learning
Ming Xu
seed group of ByteDance, China
Lang Chen
National Key Laboratory for Multi-modal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Licai Sun
University of Oulu
Affective Computing, Deep Learning, Machine Learning
B. Liu
National Key Laboratory for Multi-modal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Jianhua Tao
Department of Automation, Tsinghua University, Beijing, China, and Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China