🤖 AI Summary
Label noise in real-world data severely degrades model performance, yet existing detection methods lack standardized, comparable evaluation protocols. To address this, we propose a three-component decomposition framework—comprising label consistency measurement, aggregation strategy, and information source selection—and establish the first unified cross-modal (image/tabular) benchmark for label noise detection. We introduce the false negative rate at a fixed operating point as a fair, comparable metric. Extensive experiments systematically evaluate combinations of in-sample vs. out-of-sample information, average probability vs. majority voting aggregation, and logit margin vs. softmax confidence consistency measures on both synthetic and real-world noisy datasets. Results demonstrate that in-sample information combined with average probability aggregation and logit margin-based consistency achieves superior performance across most settings. This work establishes the first interpretable, scalable, and empirically grounded evaluation framework for label noise detection, providing actionable insights for method selection and design.
📝 Abstract
Label noise is a common problem in real-world datasets, affecting both model training and validation. Clean data are essential for achieving strong performance and ensuring reliable evaluation. While various techniques have been proposed to detect noisy labels, there is no clear consensus on the optimal approach. We perform a comprehensive benchmark of detection methods by decomposing them into three fundamental components: the label agreement function, the aggregation method, and the information-gathering approach (in-sample vs. out-of-sample). This decomposition applies to many existing detection methods and enables systematic comparison across diverse approaches. To compare methods fairly, we propose a unified benchmark task: detecting a fraction of training samples equal to the dataset's noise rate. We also introduce a novel metric, the false negative rate at this fixed operating point. Our evaluation spans vision and tabular datasets under both synthetic and real-world noise conditions. We find that in-sample information gathering with average probability aggregation and the logit margin as the label agreement function achieves the best results across most scenarios. Our findings provide practical guidance for designing new detection methods and for selecting techniques for specific applications.
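The benchmark task and metric described above are simple to implement. The following is a minimal sketch, not the authors' released code: it assumes per-sample logits have already been produced (e.g., averaged across training checkpoints or cross-validation folds), scores each sample with the logit margin as the label agreement function, flags the lowest-scoring fraction equal to the known noise rate, and reports the false negative rate at that fixed operating point. Function names and array shapes are illustrative assumptions.

```python
import numpy as np

def logit_margin(logits, labels):
    """Label agreement score: logit of the assigned label minus the largest
    other logit. Low or negative margins suggest the model disagrees with
    the given label, i.e., the label may be noisy.

    logits: (n_samples, n_classes) array; labels: (n_samples,) int array.
    """
    n = len(labels)
    label_logit = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf  # exclude the labeled class
    return label_logit - masked.max(axis=1)

def fnr_at_noise_rate(scores, is_noisy, noise_rate):
    """Flag the lowest-scoring fraction of samples equal to the noise rate,
    then return the false negative rate: the share of truly noisy samples
    that were NOT flagged at this fixed operating point."""
    n = len(scores)
    k = int(round(noise_rate * n))          # number of samples to flag
    flagged = np.zeros(n, dtype=bool)
    flagged[np.argsort(scores)[:k]] = True  # lowest agreement -> flagged
    missed = np.sum(is_noisy & ~flagged)
    return missed / np.sum(is_noisy)
```

Fixing the number of flagged samples to the noise rate puts every scoring method at the same operating point, so differences in false negative rate reflect ranking quality rather than threshold choice.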