Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning

📅 2026-01-03
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limited understanding of how disorder influences training and generalization in high-dimensional dynamical systems relevant to machine learning. By integrating dynamical mean-field theory (DMFT), random matrix theory, the cavity method, and path integrals, the authors reduce complex high-dimensional coupled systems to effective single-site stochastic processes driven by non-Hermitian random matrices. They uncover a novel mechanism, rooted in non-Hermitian structure, that leads to non-monotonic training-loss dynamics in settings such as gradient flow, random feature models, and deep linear networks. A DMFT-based bias–variance decomposition is introduced via ensemble averaging over noise realizations, and the emergent spiked random-matrix structure underlying feature learning in deep linear networks is characterized. Finally, asymptotic dynamics of both training and test losses are derived for high-dimensional random data, providing a theoretical foundation for quantifying strategies such as ensemble learning.
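The bias–variance decomposition mentioned above rests on a standard ensemble-averaging identity: the average single-model error splits exactly into the squared bias of the ensemble-mean predictor plus the across-ensemble variance. A minimal NumPy sketch of that identity (this is not the paper's DMFT computation; the "noise realizations" here are hypothetical weight perturbations standing in for the DMFT noise variables):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_test, K = 30, 500, 200

# Noiseless linear teacher on Gaussian test inputs.
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n_test, d))
y = X @ w_star

# "Noise realizations": each ensemble member perturbs the teacher weights,
# a stand-in for independent draws of the DMFT noise (illustrative only).
preds = np.stack([X @ (w_star + 0.5 * rng.standard_normal(d) / np.sqrt(d))
                  for _ in range(K)])

mean_pred = preds.mean(axis=0)              # ensemble-averaged predictor
bias2 = np.mean((mean_pred - y) ** 2)       # squared bias
variance = np.mean(preds.var(axis=0))       # across-ensemble variance
avg_single_err = np.mean((preds - y) ** 2)  # mean error of a single member

# Classic decomposition: E[single-model error] = bias^2 + variance,
# so averaging K members removes the variance term from the test error.
print(np.isclose(avg_single_err, bias2 + variance))  # True
```

Averaging the ensemble (bagging) leaves only the bias term, which is why ensembling reduces test error exactly when variance dominates.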

📝 Abstract
We provide an overview of high-dimensional dynamical systems driven by random matrices, focusing on applications to simple models of learning and generalization in machine learning theory. Using both cavity method arguments and path integrals, we review how the behavior of a coupled infinite-dimensional system can be characterized as a stochastic process for each single site of the system. We provide a pedagogical treatment of dynamical mean field theory (DMFT), a framework that can be flexibly applied to these settings. The DMFT single-site stochastic process is fully characterized by a set of (two-time) correlation and response functions. For linear time-invariant systems, we illustrate connections between random matrix resolvents and the DMFT response. We demonstrate applications of these ideas to machine learning models such as gradient flow, stochastic gradient descent on random feature models, and deep linear networks in the feature learning regime trained on random data. We demonstrate how bias and variance decompositions (e.g., analysis of ensembling and bagging) can be computed by averaging over subsets of the DMFT noise variables. From our formalism we also investigate how linear systems driven by random non-Hermitian matrices (such as random feature models) can exhibit non-monotonic loss curves with training time, while Hermitian matrices with matching spectra do not, highlighting a different mechanism for non-monotonicity than small eigenvalues causing instability to label noise. Lastly, we provide asymptotic descriptions of the training and test loss dynamics for randomly initialized deep linear neural networks trained in the feature learning regime with high-dimensional random data. In this case, the time-translation-invariance structure is lost and the hidden-layer weights are characterized as spiked random matrices.
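The abstract's contrast between non-Hermitian and Hermitian driving matrices can be seen in a deliberately small toy model: non-normal matrices produce transient amplification of the loss even when every eigenvalue is stable, while a Hermitian matrix with the same spectrum decays monotonically. A minimal sketch, assuming gradient-flow-like dynamics dx/dt = -A x with loss ||x(t)||²; the 2×2 Jordan-block matrix and the value k = 10 are illustrative choices, not the paper's setup:

```python
import numpy as np

# Non-normal A with a double eigenvalue at 1 (a Jordan block):
#     A = [[1, k], [0, 1]]  =>  exp(-A t) = e^{-t} [[1, -k t], [0, 1]].
k = 10.0

def loss_nonnormal(t, x0=np.array([0.0, 1.0])):
    """Loss ||x(t)||^2 under dx/dt = -A x, using the closed-form exp(-A t)."""
    E = np.exp(-t) * np.array([[1.0, -k * t], [0.0, 1.0]])
    xt = E @ x0
    return float(xt @ xt)

def loss_hermitian(t, x0=np.array([0.0, 1.0])):
    """Hermitian comparison with the same spectrum {1, 1} is the identity,
    so the loss is just ||x0||^2 e^{-2t}: monotone decay."""
    return float(x0 @ x0) * np.exp(-2.0 * t)

ts = np.linspace(0.0, 3.0, 61)
L_nn = [loss_nonnormal(t) for t in ts]
L_h = [loss_hermitian(t) for t in ts]

# Non-normal case: loss dips, then grows well above its initial value
# (here ||x(t)||^2 = e^{-2t}(1 + k^2 t^2)) before finally decaying.
print(max(L_nn) > L_nn[0])                         # True: transient growth
print(all(a >= b for a, b in zip(L_h, L_h[1:])))   # True: monotone decay
```

Both matrices have the stable spectrum {1, 1}; the non-monotonicity comes entirely from the non-orthogonal eigenvectors of the non-normal matrix, which is the spectrum-independent mechanism the abstract highlights.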
Problem

Research questions and friction points this paper is trying to address.

disordered dynamics
random matrices
machine learning
high-dimensional systems
non-monotonic loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamical Mean Field Theory
Random Matrices
Non-Hermitian Dynamics
Bias-Variance Decomposition
Deep Linear Networks