🤖 AI Summary
This work investigates feature learning mechanisms for high-dimensional non-Gaussian data, focusing on the interplay between Independent Component Analysis (ICA) and Stochastic Gradient Descent (SGD) in representation learning. Theoretically, it establishes the first rigorous sample-complexity lower bound for FastICA, showing that at least $n \gtrsim d^4$ samples are needed to recover a single non-Gaussian direction, and demonstrates that SGD with loss smoothing attains the optimal rate $n \gtrsim d^2$; it further characterises the intrinsic coupling between optimisation dynamics and the non-Gaussianity of the data. Empirically, FastICA exhibits a search phase on ImageNet, and the strong non-Gaussianity of natural images compensates for its poor sample complexity. Collectively, the results delineate the sample-complexity boundaries of both algorithms and offer an interpretable statistical principle for the emergence of first-layer filters in deep networks.
📝 Abstract
Deep neural networks learn structured features from complex, non-Gaussian inputs, but the mechanisms behind this process remain poorly understood. Our work is motivated by the observation that the first-layer filters learnt by deep convolutional neural networks from natural images resemble those learnt by independent component analysis (ICA), a simple unsupervised method that seeks the most non-Gaussian projections of its inputs. This similarity suggests that ICA provides a simple, yet principled model for studying feature learning. Here, we leverage this connection to investigate the interplay between data structure and optimisation in feature learning for the most popular ICA algorithm, FastICA, and stochastic gradient descent (SGD), which is used to train deep networks. We rigorously establish that FastICA requires at least $n \gtrsim d^4$ samples to recover a single non-Gaussian direction from $d$-dimensional inputs on a simple synthetic data model. We show that vanilla online SGD outperforms FastICA, and prove that the optimal sample complexity $n \gtrsim d^2$ can be reached by smoothing the loss, albeit in a data-dependent way. We finally demonstrate the existence of a search phase for FastICA on ImageNet, and discuss how the strong non-Gaussianity of said images compensates for the poor sample complexity of FastICA.
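The setting above (recovering a single planted non-Gaussian direction from $d$-dimensional inputs) can be sketched with a minimal one-unit FastICA fixed-point iteration using the kurtosis contrast. This is an illustrative toy, not the paper's exact data model or algorithm: the planted direction, the Rademacher source, and all dimensions here are assumptions chosen so that $n$ comfortably exceeds $d^4$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 100_000  # n >> d^4 here, so the planted direction is recoverable

# Hypothetical synthetic model: inputs are Gaussian in every direction
# except along a planted unit vector u, where the projection is a
# Rademacher source (unit variance, negative excess kurtosis).
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
z = rng.standard_normal((n, d))
s = rng.choice([-1.0, 1.0], size=n)
x = z - np.outer(z @ u, u) + np.outer(s, u)  # whitened by construction

# One-unit FastICA fixed-point iteration with the kurtosis contrast:
# w <- E[x (w.x)^3] - 3w, then renormalise. Early iterations from a
# random start are the "search phase": the signal ~ a^3 (a = overlap
# with u) must emerge from finite-sample noise before convergence.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(200):
    proj = x @ w
    w_new = (x * proj[:, None] ** 3).mean(axis=0) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1 - 1e-9  # sign may flip each step
    w = w_new
    if converged:
        break

overlap = abs(w @ u)  # close to 1 once the direction is recovered
print(f"overlap |<w, u>| = {overlap:.3f}")
```

Shrinking `n` toward $d^2$ in this sketch lengthens the search phase dramatically, which is the regime the paper's lower bound speaks to.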