🤖 AI Summary
This paper investigates how abstract representations (task-relevant variables encoded in approximately orthogonal, low-dimensional subspaces) emerge in neural networks.
Method: We formulate a mean-field analytical framework that maps the optimization over network weights into a variational problem over the distribution of neuron pre-activations, modeling the training of finite-width ReLU networks as optimization over probability measures; the framework is then rigorously extended to general activation functions and deep architectures.
Contribution/Results: We prove that, under task-driven learning, every global optimum of a feedforward nonlinear network necessarily admits a low-dimensional, disentangled representation whose structure is consistent with the task. Theoretical analysis establishes that such abstract representations arise at global minima across diverse network architectures and activation functions. This work provides the first analytically tractable and broadly applicable mathematical framework for understanding representation learning in both biological and artificial neural systems.
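To make the mean-field idea concrete, here is a minimal toy sketch (not the paper's construction; all names and scalings below are illustrative assumptions): in mean-field scaling, a width-N one-hidden-layer ReLU network's output depends on its weights only through the empirical distribution of the per-neuron parameter triples, which is what licenses rewriting weight optimization as a variational problem over that distribution. Permuting the hidden neurons leaves the function, and hence the loss, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mean-field parameterization (illustrative, not the paper's exact setup):
#   f(x) = (1/N) * sum_i a_i * relu(w_i * x + b_i)
# The output is an average over per-neuron "particles" (w_i, b_i, a_i), so it
# is a functional of their empirical distribution alone.
N = 128
w, b, a = rng.normal(size=(3, N))

def f(x, w, b, a):
    pre = np.outer(x, w) + b              # pre-activations, shape (len(x), N)
    return np.maximum(pre, 0.0) @ a / N   # mean-field 1/N output scaling

x = np.linspace(-1.0, 1.0, 32)
perm = rng.permutation(N)

# Same empirical distribution of (w_i, b_i, a_i)  =>  same function.
assert np.allclose(f(x, w, b, a), f(x, w[perm], b[perm], a[perm]))
```

This permutation invariance is the elementary observation behind the distributional reformulation: the loss landscape over weights collapses, up to symmetry, onto a landscape over probability measures.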
📝 Abstract
Recent experiments reveal that task-relevant variables are often encoded in approximately orthogonal subspaces of the neural activity space. These disentangled low-dimensional representations are observed in multiple brain areas and across different species, and are typically the result of a process of abstraction that supports simple forms of out-of-distribution generalization. The mechanisms by which such geometries emerge remain poorly understood, and the mechanisms that have been investigated are typically unsupervised (e.g., based on variational auto-encoders). Here, we show mathematically that abstract representations of latent variables are guaranteed to appear in the last hidden layer of feedforward nonlinear networks when they are trained on tasks that depend directly on these latent variables. These abstract representations reflect the structure of the desired outputs or the semantics of the input stimuli. To investigate the neural representations that emerge in these networks, we develop an analytical framework that maps the optimization over the network weights into a mean-field problem over the distribution of neural preactivations. Applying this framework to a finite-width ReLU network, we find that its hidden layer exhibits an abstract representation at all global minima of the task objective. We further extend these analyses to two broad families of activation functions and deep feedforward architectures, demonstrating that abstract representations naturally arise in all these scenarios. Together, these results provide an explanation for the widely observed abstract representations in both the brain and artificial neural networks, as well as a mathematically tractable toolkit for understanding the emergence of different kinds of representations in task-optimized, feature-learning network models.
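To illustrate what "approximately orthogonal subspaces" means geometrically, here is a minimal synthetic sketch (constructed data, not the paper's trained networks; the coding-vector readout below is a standard convention, assumed for illustration): hidden activity encodes two binary latent variables along two orthonormal directions plus noise, and each variable's coding vector is the difference of class-mean activity. In an abstract representation, the two coding vectors are close to orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "abstract" hidden activity: two binary latent variables are
# encoded along orthonormal directions u and v of a d-dimensional space.
d, n = 50, 400
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v -= (v @ u) * u; v /= np.linalg.norm(v)  # v ⟂ u

s = rng.integers(0, 2, size=n)   # latent variable 1 (e.g., stimulus value)
c = rng.integers(0, 2, size=n)   # latent variable 2 (e.g., context)
H = np.outer(s, u) + np.outer(c, v) + 0.1 * rng.normal(size=(n, d))

def coding_vector(H, labels):
    # Difference of class means: the direction along which a linear
    # readout decodes this variable.
    return H[labels == 1].mean(axis=0) - H[labels == 0].mean(axis=0)

cs, cc = coding_vector(H, s), coding_vector(H, c)
cos = cs @ cc / (np.linalg.norm(cs) * np.linalg.norm(cc))
print(f"cosine between coding vectors: {cos:.3f}")  # near 0 => ~orthogonal
```

A near-zero cosine between the two coding vectors is the geometric signature the abstract refers to: each variable can be read out linearly without interference from the other, which is what supports the simple out-of-distribution generalization mentioned above.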