A solvable high-dimensional model where nonlinear autoencoders learn structure invisible to PCA while test loss misaligns with representation quality

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of uncovering hidden structure in high-dimensional data that manifests only in higher-order statistics and is therefore invisible to second-order methods such as PCA. The authors construct an analytically tractable high-dimensional two-factor latent variable model in which one latent factor is detectable solely through higher-order moments. Within this rigorously solvable framework, they establish, for the first time, that nonlinear autoencoders can recover latent structure missed by linear approaches. Both theoretical analysis and experiments demonstrate that, despite exhibiting higher reconstruction loss than linear models, the representations learned by nonlinear autoencoders are more informative, revealing a potential misalignment between reconstruction error and representation quality.
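To make "statistically dependent yet uncorrelated" concrete, here is one standard construction of such a factor pair (our illustration of the general mechanism; the paper's exact choice may differ). Take $z_1 \sim \mathcal{N}(0,1)$ and define $z_2$ as an even, zero-mean function of $z_1$:

$$
z_2 = \operatorname{sign}\!\left(|z_1| - c\right), \qquad c = \text{median of } |z_1| \approx 0.6745 .
$$

Since $z_1 \mapsto z_1 z_2$ is odd in $z_1$, we get $\mathbb{E}[z_1 z_2] = 0$, and the choice of $c$ gives $\mathbb{E}[z_2] = 0$: the pair is uncorrelated, so $z_2$ leaves all covariances untouched. Yet $\mathbb{E}[z_1^2 z_2] > 0$, because $z_2 = +1$ exactly on the high-variance events $|z_1| > c$. The dependence is invisible at second order but present in the mixed third moment.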

📝 Abstract
Many real-world datasets contain hidden structure that cannot be detected by simple linear correlations between input features. For example, latent factors may influence the data in a coordinated way, even though their effect is invisible to covariance-based methods such as PCA. In practice, nonlinear neural networks often succeed in extracting such hidden structure in unsupervised and self-supervised learning. However, constructing a minimal high-dimensional model where this advantage can be rigorously analyzed has remained an open theoretical challenge. We introduce a tractable high-dimensional spiked model with two latent factors: one visible to covariance, and one statistically dependent yet uncorrelated, appearing only in higher-order moments. PCA and linear autoencoders fail to recover the latter, while a minimal nonlinear autoencoder provably extracts both. We analyze both the population risk and empirical risk minimization. Our model also provides a tractable example where self-supervised test loss is poorly aligned with representation quality: nonlinear autoencoders recover latent structure that linear methods miss, even though their reconstruction loss is higher.
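The snippet below is a minimal numerical sketch of this phenomenon (the construction, dimensions, and spike strength are our own illustrative choices, not the paper's exact model): it plants a covariance-visible Gaussian spike along a direction $u$ and the dependent-but-uncorrelated factor from above along an orthogonal direction $v$, then checks that PCA recovers $u$ but not $v$, while a mixed third moment exposes $v$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 20_000        # ambient dimension, sample size (illustrative choices)
beta = 4.0                # strength of the covariance-visible spike

# Two orthonormal spike directions.
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v -= (v @ u) * u; v /= np.linalg.norm(v)

# Latent factors: z2 is a deterministic function of z1, hence strongly
# dependent on it, yet zero-mean and uncorrelated (c = median of |z1|).
z1 = rng.standard_normal(n)
c = 0.6745
z2 = np.sign(np.abs(z1) - c)

# Noise with its v-component removed, so the total variance along v is
# exactly Var(z2) = 1: the v-spike leaves the covariance isotropic there.
eps = rng.standard_normal((n, d))
eps -= np.outer(eps @ v, v)

X = np.sqrt(beta) * np.outer(z1, u) + np.outer(z2, v) + eps

# PCA sees the u-spike (top eigenvalue ~ beta + 1) but nothing along v.
_, eigvecs = np.linalg.eigh(X.T @ X / n)
print("overlap of top PC with u:       ", abs(eigvecs[:, -1] @ u))       # ~ 1
print("largest overlap of any PC with v:", np.abs(eigvecs.T @ v).max())  # small

# Second-order statistics along v look like plain noise...
proj_u, proj_v = X @ u, X @ v
print("sample variance along v:", proj_v.var())                          # ~ 1
# ...but the mixed third moment exposes the hidden factor.
print("E[(X.u)^2 (X.v)]:", np.mean(proj_u**2 * proj_v))                  # clearly > 0
```

In this sketch PCA's failure is structural: the population covariance is $\beta\, u u^\top + I_d$, with no trace of $v$, so no second-order method can find the hidden direction, while any statistic sensitive to third moments can. This is the kind of gap the paper's minimal nonlinear autoencoder is shown to exploit.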
Problem

Research questions and friction points this paper is trying to address.

nonlinear autoencoders
hidden structure
PCA
high-dimensional model
higher-order moments
Innovation

Methods, ideas, or system contributions that make the work stand out.

nonlinear autoencoders
high-dimensional spiked model
higher-order moments
representation learning
test loss misalignment

Authors

Vicente Conde Mendes
Statistical Physics of Computation Laboratory, École polytechnique fédérale de Lausanne (EPFL), CH-1015 Lausanne
Lorenzo Bardone
Statistical Physics of Computation Laboratory, École polytechnique fédérale de Lausanne (EPFL), CH-1015 Lausanne
Cédric Koller
Ph.D. Student, EPFL
Statistical Physics, Disordered Systems, Graphs, Dynamical Systems, Neural Networks
Jorge Medina Moreira
Statistical Physics of Computation Laboratory, École polytechnique fédérale de Lausanne (EPFL), CH-1015 Lausanne
Vittorio Erba
EPFL, Lausanne
Statistical mechanics of combinatorial optimization problems and machine learning
Emanuele Troiani
École polytechnique fédérale de Lausanne
Lenka Zdeborová
EPFL, Switzerland
statistical physics, learning theory, phase transitions, deep learning, high-dimensional statistics