🤖 AI Summary
Analytically computing the expected trace of rational expressions of large random matrices, a calculation central to high-dimensional machine learning theory, is laborious and error-prone by hand. Method: We introduce auto-fpt, the first fully automated symbolic computation framework for free probability theory (FPT), built on SymPy. Given a rational expression of random matrices, it models the underlying operator algebra, exploits asymptotic freeness, and automatically derives and simplifies the corresponding system of fixed-point equations. Technically, it combines linearization-based modeling of neural networks with efficient equation-generation algorithms, rendering the entire FPT computational pipeline symbolic and reproducible. Contribution/Results: auto-fpt reproduces classical results, including the high-dimensional error analysis of linearized feed-forward networks, demonstrating both correctness and scalability. By drastically lowering the barrier to applying free probability in ML theory, it provides extensible computational infrastructure for reproducing known phenomena and discovering new ones.
📝 Abstract
Much of modern machine learning theory involves computing the high-dimensional expected trace of a rational expression of large rectangular random matrices. To symbolically compute such quantities using free probability theory, we introduce auto-fpt, a lightweight Python and SymPy-based tool that automatically produces a reduced system of fixed-point equations that can be solved for the quantities of interest, and which effectively constitutes a theory. We overview the algorithmic ideas underlying auto-fpt and its applications to various interesting problems, such as the high-dimensional error of linearized feed-forward neural networks, recovering well-known results. We hope that auto-fpt streamlines the majority of calculations involved in high-dimensional analysis, while helping the machine learning community reproduce known and uncover new phenomena.
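To make the kind of output concrete, here is a hand-written sketch (not the auto-fpt API) of the simplest instance of such a fixed-point system: the self-consistent equation from free probability for the Stieltjes transform m(z) of a Wishart matrix W = XXᵀ/n, with X a p×n Gaussian matrix and aspect ratio γ = p/n. SymPy reduces the fixed-point equation to a polynomial and solves it, and a quick Monte-Carlo run checks the result against an actual random matrix; all symbol names and the specific equation are standard Marchenko–Pastur material, not taken from the tool itself.

```python
import numpy as np
import sympy as sp

# Illustrative fixed-point equation (standard Marchenko-Pastur result,
# not the auto-fpt API): the Stieltjes transform m(z) of W = X X^T / n,
# X of shape (p, n) with i.i.d. N(0, 1) entries, gamma = p / n, satisfies
#     m = 1 / (1 - gamma - z - gamma * z * m).
z, m, gamma = sp.symbols("z m gamma")
fixed_point = sp.Eq(m, 1 / (1 - gamma - z - gamma * z * m))

# Reduce to a quadratic in m and solve symbolically.
solutions = sp.solve(fixed_point, m)

# Evaluate at z = -1, gamma = 1/2 and keep the Stieltjes branch
# (the root that is positive for real z < 0).
vals = [s.subs({z: -1, gamma: sp.Rational(1, 2)}) for s in solutions]
m_theory = next(float(v) for v in vals if sp.im(v) == 0 and float(v) > 0)

# Monte-Carlo sanity check: (1/p) tr (W + I)^{-1} for a large sample.
rng = np.random.default_rng(0)
p, n = 1000, 2000
X = rng.standard_normal((p, n))
W = X @ X.T / n
m_empirical = float(np.mean(1.0 / (np.linalg.eigvalsh(W) + 1.0)))

print(m_theory, m_empirical)  # the two values agree to a few decimals
```

auto-fpt automates exactly this pattern at scale: for more complicated rational expressions (products, inverses, and sums of several independent matrices), it derives the full coupled system of such equations instead of a single scalar one.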