PRISM: Distribution-free Adaptive Computation of Matrix Functions for Accelerating Neural Network Training

📅 2026-01-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the computational inefficiency of matrix functions (such as square roots, inverse roots, and orthogonalization) in neural network training, which stems from traditional iterative methods' reliance on prior spectral information and their inability to adapt to dynamically changing matrix spectra. The authors propose PRISM, a framework that computes matrix functions adaptively without any prior knowledge of the spectrum. At each iteration, PRISM constructs a polynomial surrogate of the current spectrum using random sketching and relies predominantly on GPU-friendly matrix multiplications. The method automatically adapts to spectral shifts during training, substantially reducing computational overhead. When integrated into the Shampoo and Muon optimizers, PRISM maintains optimization accuracy while significantly decreasing both iteration counts and wall-clock runtime.
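The Newton-Schulz-style iterations that PRISM targets can be sketched in a few lines. Below is the classical cubic Newton-Schulz orthogonalization, the baseline primitive rather than the paper's PRISM-accelerated variant; the Frobenius normalization and step count are illustrative assumptions:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=12):
    """Classical Newton-Schulz iteration: pushes the singular values of G
    toward 1, approximating the orthogonal polar factor of G.
    Uses only matrix multiplications, hence GPU-friendly."""
    # Normalize so every singular value lies in (0, 1], which guarantees
    # convergence of the cubic update below.
    X = G / np.linalg.norm(G, ord="fro")
    for _ in range(steps):
        # Each singular value s is mapped to 1.5*s - 0.5*s**3,
        # a fixed polynomial whose attracting fixed point is 1.
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

The update applies one fixed polynomial at every step; per the summary above, spectrum-adaptive schemes like PRISM instead fit the polynomial per iteration to the current spectrum.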

📝 Abstract
Matrix functions such as square roots, inverse roots, and orthogonalization play a central role in preconditioned gradient methods for neural network training. This has motivated the development of iterative algorithms that avoid explicit eigendecompositions and rely primarily on matrix multiplications, making them well suited for modern GPU accelerators. We present PRISM (Polynomial-fitting and Randomized Iterative Sketching for Matrix functions computation), a general framework for accelerating iterative algorithms for computing matrix functions. PRISM combines adaptive polynomial approximation with randomized sketching: at each iteration, it fits a polynomial surrogate to the current spectrum via a sketched least-squares problem, adapting to the instance at hand with minimal overhead. We apply PRISM to accelerate Newton-Schulz-like iterations for matrix square roots and orthogonalization, which are core primitives in machine learning. Unlike prior methods, PRISM requires no explicit spectral bounds or singular value estimates, and it adapts automatically to the evolving spectrum. Empirically, PRISM accelerates training when integrated into the Shampoo and Muon optimizers.
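As a simplified illustration of the polynomial-surrogate idea (not the paper's algorithm: the probe-based spectral-range estimate, the fitting interval, and the function name here are all assumptions), one can fit a least-squares polynomial to the target scalar function over an estimated spectral range and then apply it using matrix multiplications only:

```python
import numpy as np

def sketched_poly_matrix_function(A, f, degree=8, n_probes=4, rng=None):
    """Illustrative sketch: approximate f(A) for a symmetric PSD matrix A by
    (1) estimating its spectral range with random power-iteration probes,
    (2) fitting a least-squares polynomial surrogate to f on that range,
    (3) evaluating the surrogate at A via Horner's rule (mat-muls only)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Step 1: crude largest-eigenvalue estimate from a few random probes.
    Z = rng.standard_normal((n, n_probes))
    for _ in range(10):
        Z = A @ Z
        Z /= np.linalg.norm(Z, axis=0)
    lam_max = max(float(z @ A @ z) for z in Z.T)  # Rayleigh quotients
    lam_max *= 1.2  # safety margin: Rayleigh quotients underestimate
    # Step 2: fit the surrogate polynomial on sampled spectrum points.
    xs = np.linspace(1e-3 * lam_max, lam_max, 200)
    coeffs = np.polyfit(xs, f(xs), degree)  # highest-degree term first
    # Step 3: Horner evaluation of p(A), using only matrix products and adds.
    P = coeffs[0] * np.eye(n)
    for c in coeffs[1:]:
        P = P @ A + c * np.eye(n)
    return P
```

Here the surrogate is fitted once; per the abstract, PRISM instead refits at every iteration of the underlying solver via sketched least squares, so the polynomial tracks the evolving spectrum.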
Problem

Research questions and friction points this paper is trying to address.

matrix functions
neural network training
preconditioned gradient methods
adaptive computation
distribution-free
Innovation

Methods, ideas, or system contributions that make the work stand out.

matrix functions
adaptive polynomial approximation
randomized sketching
distribution-free
neural network optimization