Benchmarking noisy label detection methods

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Label noise in real-world data severely degrades model performance, yet existing detection methods lack standardized, comparable evaluation protocols. To address this, we propose a three-component decomposition framework—comprising label consistency measurement, aggregation strategy, and information source selection—and establish the first unified cross-modal (image/tabular) benchmark for label noise detection. We introduce the false negative rate at a fixed operating point as a fair, comparable metric. Extensive experiments systematically evaluate combinations of in-sample vs. out-of-sample information, average probability vs. majority voting aggregation, and logit margin vs. softmax confidence consistency measures, on both synthetic and real-world noisy datasets. Results demonstrate that in-sample information combined with average probability aggregation and logit margin-based consistency achieves superior performance across most settings. This work establishes the first interpretable, scalable, and empirically grounded evaluation framework for label noise detection, providing actionable insights for method selection and design.

📝 Abstract
Label noise is a common problem in real-world datasets, affecting both model training and validation. Clean data are essential for achieving strong performance and ensuring reliable evaluation. While various techniques have been proposed to detect noisy labels, there is no clear consensus on optimal approaches. We perform a comprehensive benchmark of detection methods by decomposing them into three fundamental components: label agreement function, aggregation method, and information gathering approach (in-sample vs out-of-sample). This decomposition can be applied to many existing detection methods, and enables systematic comparison across diverse approaches. To fairly compare methods, we propose a unified benchmark task, detecting a fraction of training samples equal to the dataset's noise rate. We also introduce a novel metric: the false negative rate at this fixed operating point. Our evaluation spans vision and tabular datasets under both synthetic and real-world noise conditions. We identify that in-sample information gathering using average probability aggregation combined with the logit margin as the label agreement function achieves the best results across most scenarios. Our findings provide practical guidance for designing new detection methods and selecting techniques for specific applications.
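The benchmark task and metric described above are easy to make concrete: flag the `k` lowest-scoring samples, where `k` equals the dataset's noise rate times the dataset size, and report the fraction of truly noisy samples that escape detection. The sketch below assumes per-sample agreement scores where higher means "more consistent with the given label"; the function name and signature are illustrative, not the paper's code.

```python
import numpy as np

def fnr_at_noise_rate(scores, is_noisy, noise_rate):
    """False negative rate at the fixed operating point: flag the
    `noise_rate` fraction of samples with the lowest agreement scores.

    scores     : per-sample label-agreement scores (higher = cleaner)
    is_noisy   : boolean ground-truth noise mask
    noise_rate : fraction of samples to flag (the dataset's noise rate)
    """
    n = len(scores)
    k = int(round(noise_rate * n))        # fixed operating point
    flagged = np.argsort(scores)[:k]      # k least label-consistent samples
    detected = np.zeros(n, dtype=bool)
    detected[flagged] = True
    # False negatives: truly noisy samples that were not flagged
    fn = np.sum(is_noisy & ~detected)
    return fn / max(is_noisy.sum(), 1)
```

Fixing the number of flagged samples to the noise rate removes the threshold as a free parameter, which is what makes scores from different methods directly comparable.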
Problem

Research questions and friction points this paper is trying to address.

Benchmarking noisy label detection methods in datasets
Decomposing detection approaches into fundamental components
Evaluating methods across vision and tabular datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing detection methods into three fundamental components
Proposing a unified benchmark task with fixed noise rate
Introducing a novel metric for false negative rate
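The best-performing combination reported in the abstract (in-sample information, average probability aggregation, logit margin as the agreement function) could be sketched as follows. This is a minimal reading of those components, not the authors' implementation: per-sample logit margins are computed at several training snapshots and averaged, so a persistently negative margin marks a likely noisy label.

```python
import numpy as np

def logit_margin(logits, labels):
    """Logit-margin agreement: logit of the given label minus the largest
    competing logit. Large negative margins suggest a noisy label."""
    n = len(labels)
    given = logits[np.arange(n), labels]
    masked = logits.astype(float).copy()
    masked[np.arange(n), labels] = -np.inf   # exclude the given label
    return given - masked.max(axis=1)

def aggregated_scores(logit_snapshots, labels):
    """In-sample gathering with averaging aggregation: compute the
    logit-margin agreement at each training snapshot of the same model,
    then average over snapshots (higher = more label-consistent)."""
    margins = np.stack([logit_margin(l, labels) for l in logit_snapshots])
    return margins.mean(axis=0)
```

"In-sample" here means the snapshots come from the model trained on the very data being audited, as opposed to out-of-sample schemes such as cross-validated holdout predictions.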
Henrique Pickler
Universidade Federal de Santa Catarina, R. Eng. Agronômico Andrei Cristian Ferreira, Florianópolis, 88040-900, Santa Catarina, Brazil
Jorge K. S. Kamassury
Universidade Federal de Santa Catarina, R. Eng. Agronômico Andrei Cristian Ferreira, Florianópolis, 88040-900, Santa Catarina, Brazil
Danilo Silva
Associate Professor, Federal University of Santa Catarina
Machine Learning · Deep Learning · Information Theory