Cross-Entropy Is All You Need To Invert the Data Generating Process

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a unified theoretical explanation for the effectiveness of supervised learning, specifically investigating whether cross-entropy minimization can recover interpretable latent factors underlying the data-generating process. Method: The authors prove, under standard classification settings, that cross-entropy training implicitly inverts the data-generating process, so that learned representations are linear transformations of the true latent factors. Their approach combines theoretical analysis (a formal identifiability theorem), controlled synthetic experiments, evaluation on the DisLib disentanglement benchmark, and linear probing on ImageNet. Contribution/Results: Across three distinct data regimes, experiments consistently demonstrate linear decodability of latent factors from classifier representations. This provides the first theoretically rigorous and empirically validated identifiability framework for supervised learning, unifying explanations for neural analogy-making, linear representation structure, and feature superposition phenomena.
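The "linear decodability" claim above can be illustrated with a minimal sketch (not the authors' code): if learned representations really are a linear transformation of the true latent factors, an ordinary least-squares probe should recover the factors almost perfectly. Here the representations are simulated as a well-conditioned (orthogonal) mixing of the latents plus small noise; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground-truth latent factors z and "learned" representations h
# that are, per the paper's identifiability claim, a linear transform of z.
n, d = 1000, 5
z = rng.normal(size=(n, d))                  # true latent factors
A, _ = np.linalg.qr(rng.normal(size=(d, d))) # orthogonal mixing (well-conditioned)
h = z @ A + 0.01 * rng.normal(size=(n, d))   # representations + small noise

# Linear probe: recover z from h by ordinary least squares.
W, *_ = np.linalg.lstsq(h, z, rcond=None)
z_hat = h @ W

# Coefficient of determination per factor; values near 1.0 indicate
# the factors are linearly decodable from the representations.
ss_res = ((z - z_hat) ** 2).sum(axis=0)
ss_tot = ((z - z.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(r2)
```

In the paper's experiments the probe is fit on real classifier features rather than a synthetic mixing, but the evaluation logic is the same.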

📝 Abstract
Supervised learning has become a cornerstone of modern machine learning, yet a comprehensive theory explaining its effectiveness remains elusive. Empirical phenomena, such as neural analogy-making and the linear representation hypothesis, suggest that supervised models can learn interpretable factors of variation in a linear fashion. Recent advances in self-supervised learning, particularly nonlinear Independent Component Analysis, have shown that these methods can recover latent structures by inverting the data generating process. We extend these identifiability results to parametric instance discrimination, then show how insights transfer to the ubiquitous setting of supervised learning with cross-entropy minimization. We prove that even in standard classification tasks, models learn representations of ground-truth factors of variation up to a linear transformation. We corroborate our theoretical contribution with a series of empirical studies. First, using simulated data matching our theoretical assumptions, we demonstrate successful disentanglement of latent factors. Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations. Finally, we reveal that models trained on ImageNet encode representations that permit linear decoding of proxy factors of variation. Together, our theoretical findings and experiments offer a compelling explanation for recent observations of linear representations, such as superposition in neural networks. This work takes a significant step toward a cohesive theory that accounts for the unreasonable effectiveness of supervised deep learning.
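The objective the abstract refers to is standard softmax cross-entropy. As a point of reference, here is a minimal self-contained sketch of that loss (illustrative only, not tied to the paper's implementation):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-probability assigned to the correct class.
    p = softmax(logits)
    n = len(labels)
    return -np.log(p[np.arange(n), labels]).mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 0.3]])
labels = np.array([0, 2])
loss = cross_entropy(logits, labels)
print(loss)
```

The paper's theoretical result concerns what minimizing this objective forces the penultimate-layer representations to encode, namely the ground-truth factors of variation up to a linear transformation.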
Problem

Research questions and friction points this paper is trying to address.

Inverting the data-generating process
Linear representations in supervised learning
Disentanglement of latent factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-entropy minimization inverts the data-generating process
Linear transformation recovers latent structures
Supervised learning recovers interpretable factors of variation