Drainage: A Unifying Framework for Addressing Class Uncertainty

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning models exhibit limited robustness to label noise and class ambiguity, and struggle to detect out-of-distribution (OOD) samples. To address these challenges, we propose a unified framework that introduces a learnable “drain node” at the classification network’s output layer, providing a natural escape route for highly uncertain samples. This mechanism enables, for the first time, end-to-end joint modeling of instance-dependent and asymmetric label noise without auxiliary modules, while facilitating probabilistic mass redistribution. Our approach integrates standard cross-entropy loss with a dynamic uncertainty-aware mechanism, preserving full differentiability throughout training. Extensive experiments demonstrate up to 9% accuracy improvement under high label noise on CIFAR-10/100, state-of-the-art performance on real-world noisy benchmarks—including mini-WebVision and Clothing-1M—and significant gains in both noise robustness and OOD detection reliability.

📝 Abstract
Modern deep learning faces significant challenges with noisy labels, class ambiguity, and the need to robustly reject out-of-distribution or corrupted samples. In this work, we propose a unified framework based on the concept of a "drainage node" added at the output of the network. The node serves to reallocate probability mass toward uncertainty, while preserving desirable properties such as end-to-end training and differentiability. This mechanism provides a natural escape route for highly ambiguous, anomalous, or noisy samples, and is particularly relevant for instance-dependent and asymmetric label noise. In systematic experiments involving the addition of varying proportions of instance-dependent or asymmetric noise to CIFAR-10/100 labels, our drainage formulation achieves an accuracy increase of up to 9% over existing approaches in the high-noise regime. Our results on real-world datasets, such as mini-WebVision, mini-ImageNet, and Clothing-1M, match or surpass existing state-of-the-art methods. Qualitative analysis reveals a denoising effect, where the drainage neuron consistently absorbs corrupt, mislabeled, or outlier data, leading to more stable decision boundaries. Furthermore, our drainage formulation enables applications well beyond classification, with immediate benefits for web-scale and semi-supervised dataset cleaning as well as open-set applications.
Problem

Research questions and friction points this paper is trying to address.

Addresses class uncertainty and noisy labels in deep learning
Handles instance-dependent and asymmetric label noise robustly
Enables robust rejection of out-of-distribution or corrupted samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adds drainage node to network output for uncertainty handling
Reallocates probability mass to ambiguous or noisy samples
Enables end-to-end training with denoising and outlier absorption
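The core mechanism above can be illustrated with a minimal sketch: append one extra "drain" logit to the K class logits before the softmax, so that ambiguous inputs send their probability mass to the drain instead of forcing a class decision. This is an assumption-laden toy, not the paper's implementation; in particular, the drain logit is shown here as a scalar, whereas in the actual method it would be a learnable output of the network trained end-to-end.

```python
import numpy as np

def drain_softmax(class_logits, drain_logit):
    """Softmax over K class logits plus one appended drain logit.

    Returns (class_probs, drained_mass); the two sum to 1 together.
    """
    z = np.append(class_logits, drain_logit)  # (K+1,) augmented logits
    z = z - z.max()                           # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p[:-1], p[-1]

# Confident sample: one class logit dominates, almost nothing drains away.
class_p, drained = drain_softmax(np.array([8.0, 0.5, 0.3]), drain_logit=1.0)

# Ambiguous sample: flat class logits, so the drain node absorbs most mass.
class_p2, drained2 = drain_softmax(np.array([1.0, 0.9, 1.1]), drain_logit=4.0)
```

Because the drain is just one more softmax output, standard cross-entropy training remains fully differentiable; the only change is the extra dimension in the output layer.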
Yasser Taha
Centre for Artificial Intelligence in Public Health Research, Robert Koch Institute, 13353 Berlin, Germany
Grégoire Montavon
Professor, Charité / BIFOLD
Explainable AI · Machine Learning · Data Science
Nils Körber
Centre for Artificial Intelligence in Public Health Research, Robert Koch Institute, 13353 Berlin, Germany