The Effect of Label Noise on the Information Content of Neural Representations

📅 2025-10-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the impact of label noise on the information content of neural-network hidden-layer representations. Addressing pervasive label corruption in supervised classification, the authors employ the *Information Imbalance*, a computationally efficient proxy for conditional mutual information, to quantify how well hidden-layer representations capture label-relevant information, and relate this metric to the cross-entropy loss to analyze inter-layer information flow. The experiments reveal three key findings: (i) the informativeness of hidden-layer representations exhibits a *double descent* trend with respect to model size, mirroring the test error; (ii) in the overparameterized regime, representations learned from noisy labels achieve discriminative power comparable to those trained on clean labels, exposing an intrinsic robustness mechanism; and (iii) representations from random-label training are inferior to random features, demonstrating that network weights actively adapt to the underlying label structure rather than merely memorizing arbitrary mappings. This work provides a systematic information-theoretic characterization of representation robustness in overparameterized regimes.


๐Ÿ“ Abstract
In supervised classification tasks, models are trained to predict a label for each data point. In real-world datasets, these labels are often noisy due to annotation errors. While the impact of label noise on the performance of deep learning models has been widely studied, its effects on the networks' hidden representations remain poorly understood. We address this gap by systematically comparing hidden representations using the Information Imbalance, a computationally efficient proxy of conditional mutual information. Through this analysis, we observe that the information content of the hidden representations follows a double descent as a function of the number of network parameters, akin to the behavior of the test error. We further demonstrate that in the underparameterized regime, representations learned with noisy labels are more informative than those learned with clean labels, while in the overparameterized regime, these representations are equally informative. Our results indicate that the representations of overparameterized networks are robust to label noise. We also found that the information imbalance between the penultimate and pre-softmax layers decreases with cross-entropy loss in the overparameterized regime. This offers a new perspective on understanding generalization in classification tasks. Extending our analysis to representations learned from random labels, we show that these perform worse than random features. This indicates that training on random labels drives networks much beyond lazy learning, as weights adapt to encode labels information.
Problem

Research questions and friction points this paper is trying to address.

Studying label noise effects on neural representations' information content
Analyzing information imbalance in hidden layers under noisy labels
Comparing representation quality across underparameterized and overparameterized regimes
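The label-noise setting studied here can be reproduced with standard symmetric corruption: each label is flipped, with some probability, to a uniformly random *different* class. A minimal sketch, assuming integer class labels (the `corrupt_labels` helper is hypothetical, not taken from the paper):

```python
import numpy as np

def corrupt_labels(y, noise_rate, num_classes, rng):
    """Flip each label, with probability noise_rate, to a uniform different class."""
    y_noisy = y.copy()
    flip = rng.random(len(y)) < noise_rate               # which samples to corrupt
    rand = rng.integers(0, num_classes - 1, size=flip.sum())
    rand += (rand >= y_noisy[flip])                      # skip over the true class
    y_noisy[flip] = rand
    return y_noisy

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=10_000)
y_noisy = corrupt_labels(y, 0.3, 10, rng)
print((y != y_noisy).mean())  # close to the 30% noise rate
```

Because the replacement class is drawn from the remaining classes only, the fraction of changed labels matches the nominal noise rate, which keeps experiments at different noise levels comparable.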
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Information Imbalance to analyze representations
Observing double descent in hidden layer information
Finding overparameterized networks robust to label noise
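The Information Imbalance behind these findings can be sketched in a few lines: Δ(A→B) is, up to normalization, the average rank, measured with distances in space B, of each point's nearest neighbor in space A. Values near 0 mean A's neighborhoods predict B's; values near 1 mean A carries no information about B. A brute-force NumPy sketch (an illustration of the metric, not the authors' implementation):

```python
import numpy as np

def information_imbalance(X_A, X_B):
    """Delta(A -> B): mean rank in space B of each point's nearest neighbor
    in space A, scaled so ~0 means A predicts B and ~1 means no information."""
    N = len(X_A)
    d_A = np.linalg.norm(X_A[:, None] - X_A[None], axis=-1)  # pairwise distances in A
    d_B = np.linalg.norm(X_B[:, None] - X_B[None], axis=-1)  # pairwise distances in B
    np.fill_diagonal(d_A, np.inf)                   # exclude self-matches in A
    nn_A = np.argmin(d_A, axis=1)                   # nearest neighbor in space A
    ranks_B = np.argsort(np.argsort(d_B, axis=1), axis=1)  # rank 0 = the point itself
    return 2.0 / N * ranks_B[np.arange(N), nn_A].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
print(information_imbalance(X, X))                          # 2/N: identical spaces
print(information_imbalance(X, rng.normal(size=(200, 8))))  # near 1: unrelated spaces
```

Applied to a network, `X_A` and `X_B` would hold the activations of two layers (or a layer and a label embedding) for the same batch of inputs; the O(N²) distance matrices are what makes the metric a computationally efficient proxy compared to estimating conditional mutual information directly.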
🔎 Similar Papers
No similar papers found.