🤖 AI Summary
This work investigates how shallow nonlinear networks map data with low-dimensional subspace structure, such as natural images, into linearly separable features, aiming to establish a theoretical foundation for the classification capability of deep networks. We rigorously prove that a single-layer network with random weights and quadratic activation achieves linear class separation with high probability when the hidden-layer width grows only polynomially with the data's intrinsic dimension, thereby decoupling the separability guarantee from the ambient input dimension. The analysis combines union-of-subspaces (UoS) modeling, random matrix theory, and probabilistic concentration arguments. Numerical experiments confirm that the derived theoretical conditions align closely with empirical performance. The key contribution is the first intrinsic-dimension-driven guarantee of linear separability for shallow nonlinear networks, bridging the gap between the empirical success of deep learning and its theoretical interpretability.
📝 Abstract
Deep neural networks have attained remarkable success across diverse classification tasks. Recent empirical studies have shown that deep networks learn features that are linearly separable across classes. However, these findings often lack rigorous justifications, even under relatively simple settings. In this work, we address this gap by examining the linear separation capabilities of shallow nonlinear networks. Specifically, inspired by the low intrinsic dimensionality of image data, we model inputs as a union of low-dimensional subspaces (UoS) and demonstrate that a single nonlinear layer can transform such data into linearly separable sets. Theoretically, we show that this transformation occurs with high probability when using random weights and quadratic activations. Notably, we prove this can be achieved when the network width scales polynomially with the intrinsic dimension of the data rather than the ambient dimension. Experimental results corroborate these theoretical findings and demonstrate that similar linear separation properties hold in practical scenarios beyond our analytical scope. This work bridges the gap between empirical observations and theoretical understanding of the separation capacity of nonlinear networks, offering deeper insights into model interpretability and generalization.
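The setting described above is easy to probe numerically. The sketch below is a minimal illustration under assumed parameters (not the paper's exact construction or constants): it samples unit-norm points from two random low-dimensional subspaces of a high-dimensional ambient space, applies one random-weight layer with a quadratic activation, and then runs the perceptron algorithm, which reaches zero training mistakes if and only if the lifted features are linearly separable. Note that the raw inputs are not linearly separable, since both subspaces pass through the origin and each sample's negation lies in the same class.

```python
# Minimal sketch (assumed setup): union of two low-dimensional subspaces,
# one random layer with quadratic activation, perceptron separability check.
import numpy as np

rng = np.random.default_rng(0)
D, d, m, n = 100, 3, 50, 40  # ambient dim, intrinsic dim, width, points per class

def sample_subspace_points(rng, D, d, n):
    """Draw n unit-norm points from a random d-dimensional subspace of R^D."""
    basis, _ = np.linalg.qr(rng.standard_normal((D, d)))  # orthonormal basis
    pts = (basis @ rng.standard_normal((d, n))).T
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)

X = np.vstack([sample_subspace_points(rng, D, d, n) for _ in range(2)])
y = np.concatenate([np.ones(n), -np.ones(n)])

# One nonlinear layer: phi(x) = (W x)^2 elementwise, with random Gaussian W.
# Width m scales with the intrinsic dimension d, not the ambient dimension D.
W = rng.standard_normal((m, D))
features = (X @ W.T) ** 2
features = np.hstack([features, np.ones((2 * n, 1))])  # append bias term

# Perceptron: converges to zero mistakes iff the feature set is separable.
w = np.zeros(m + 1)
for _ in range(1000):
    mistakes = 0
    for xi, yi in zip(features, y):
        if yi * (w @ xi) <= 0:
            w += yi * xi
            mistakes += 1
    if mistakes == 0:
        break

errors = int(np.sum(np.sign(features @ w) != y))
print("training errors after perceptron:", errors)
```

Intuitively, each quadratic feature (w·x)² is a linear functional of the outer product xx^T, and membership in a subspace with projector P is a linear condition on xx^T (namely ||Px||² = 1 for unit-norm x), so a modest number of random quadratic features suffices for the two classes to become linearly separable.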