🤖 AI Summary
This work addresses the challenge of uncovering hidden structure in high-dimensional data that manifests only in higher-order statistics and remains invisible to second-order methods such as PCA. The authors construct an analytically tractable high-dimensional two-factor latent variable model in which one latent factor is detectable solely through higher-order moments. Within this solvable framework, they establish for the first time that nonlinear autoencoders can provably recover latent structure that linear approaches miss. Both theory and experiments show that, despite incurring a higher reconstruction loss than linear models, nonlinear autoencoders learn more informative representations, revealing a potential misalignment between reconstruction error and representation quality.
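To make the setting concrete, here is a minimal numerical sketch of one way such a model can be instantiated. This construction is our own illustration, not necessarily the paper's model: the hidden factor `t = (s**2 - 1)/sqrt(2)` is a deterministic function of the visible factor `s` yet uncorrelated with it, and its spike strength is placed below the classical BBP detection threshold, so second-order statistics carry no usable trace of the direction `w` while a third-order cross-moment does.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50_000                   # data dimension, number of samples

u = np.zeros(n)                      # direction of the covariance-visible spike
u[0] = 1.0
w = np.zeros(n)                      # direction carrying the hidden factor
w[1] = 1.0

s = rng.standard_normal(p)           # visible latent factor
t = (s**2 - 1.0) / np.sqrt(2.0)      # hidden factor: E[t] = 0, E[s*t] = 0, yet t = f(s)

lam, mu = 3.0, 0.2                   # mu**2 = 0.04 < sqrt(n/p) ~ 0.063 (BBP threshold)
X = lam * np.outer(s, u) + mu * np.outer(t, w) + rng.standard_normal((p, n))

# Second order: PCA locks onto u, but no leading eigenvector has appreciable
# overlap with w because the mu-spike is buried in the Marchenko-Pastur bulk.
evals, evecs = np.linalg.eigh(X.T @ X / p)   # eigenvalues in ascending order
u_hat = evecs[:, -1]
print("overlap(top PC, u):        ", abs(u_hat @ u))
print("max overlap(top 10 PCs, w):", np.abs(evecs[:, -10:].T @ w).max())

# Third order: E[(x.u)^2 x] has a component ~ lam^2 * mu * E[s^2 t] along w,
# nonzero precisely because t depends on s. This statistic recovers w.
m = (((X @ u_hat) ** 2)[:, None] * X).mean(axis=0)
m -= (m @ u_hat) * u_hat             # remove the visible direction
print("overlap(3rd-order stat, w):", abs(m @ w) / np.linalg.norm(m))
```

Under these assumed parameters, the expected outcome is that the top principal component aligns almost perfectly with `u`, none of the leading eigenvectors overlap meaningfully with `w`, and only the third-order statistic recovers `w`.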
📝 Abstract
Many real-world datasets contain hidden structure that cannot be detected by simple linear correlations between input features. For example, latent factors may influence the data in a coordinated way, even though their effect is invisible to covariance-based methods such as PCA. In practice, nonlinear neural networks often succeed in extracting such hidden structure in unsupervised and self-supervised learning. However, constructing a minimal high-dimensional model where this advantage can be rigorously analyzed has remained an open theoretical challenge. We introduce a tractable high-dimensional spiked model with two latent factors: one visible to covariance, and one statistically dependent yet uncorrelated, appearing only in higher-order moments. PCA and linear autoencoders fail to recover the latter, while a minimal nonlinear autoencoder provably extracts both. We analyze both the population risk and empirical risk minimization. Our model also provides a tractable example where self-supervised test loss is poorly aligned with representation quality: nonlinear autoencoders recover latent structure that linear methods miss, even though their reconstruction loss is higher.
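The misalignment claim suggests a simple experiment one could run on the synthetic data above: train a linear and a minimally nonlinear autoencoder with the same bottleneck, then compare reconstruction loss against how much each latent unit reveals about the hidden factor. The sketch below assumes details the abstract does not fix: a width-2 bottleneck, a tanh encoder with a linear decoder as the "minimal nonlinear" architecture, no biases, full-batch Adam, and the same hypothetical generative process as the earlier sketch. It illustrates the protocol rather than reproducing the paper's analysis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, p = 200, 20_000                          # data dimension, sample count

# Same hypothetical generative model as the NumPy sketch above.
u, w = torch.zeros(n), torch.zeros(n)
u[0], w[1] = 1.0, 1.0
s = torch.randn(p)
t = (s**2 - 1.0) / 2.0**0.5                 # hidden factor, uncorrelated with s
X = 3.0 * s[:, None] * u + 0.2 * t[:, None] * w + torch.randn(p, n)

def train_autoencoder(nonlinear: bool, epochs: int = 300):
    """Bottleneck-2 autoencoder: tanh encoder if `nonlinear`, else purely linear."""
    enc = nn.Sequential(nn.Linear(n, 2, bias=False),
                        nn.Tanh() if nonlinear else nn.Identity())
    dec = nn.Linear(2, n, bias=False)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(epochs):                 # full-batch gradient steps
        opt.zero_grad()
        loss = ((dec(enc(X)) - X) ** 2).mean()
        loss.backward()
        opt.step()
    return enc, loss.item()

for nonlinear in (False, True):
    enc, loss = train_autoencoder(nonlinear)
    with torch.no_grad():
        z = enc(X)                          # latent codes, shape (p, 2)
    # Informativeness probe: correlation of each latent unit with the hidden factor.
    corr = [round(torch.corrcoef(torch.stack([z[:, i], t]))[0, 1].abs().item(), 3)
            for i in range(2)]
    print(f"nonlinear={nonlinear}: recon loss {loss:.4f}, |corr(z_i, t)| = {corr}")
```

If the paper's claim transfers to this toy scale, the nonlinear variant should end with a somewhat higher reconstruction loss but a larger latent correlation with `t`; ranking the two models by reconstruction loss alone would order them the other way, which is exactly the misalignment the abstract describes.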