🤖 AI Summary
This work investigates the relationship between parameter magnitude distributions and the principal eigenspaces of the loss Hessian during deep neural network training. We propose a matrix-free algorithm based on sketched SVD that enables efficient computation of over 1,000 leading Hessian eigenpairs for models with tens of millions of parameters, a scale several orders of magnitude beyond prior work. To quantify geometric alignment, we define an overlap metric on the Grassmann manifold between parameter-magnitude masks and top Hessian subspaces. Experiments reveal that large-magnitude parameters concentrate strongly along high-curvature directions and that this alignment intensifies with model scale, suggesting a geometric origin for the structural stability observed early in training. Our contributions include: (i) a novel theoretical perspective linking parameter sparsity patterns to Hessian geometry; (ii) scalable computational tools for second-order analysis; and (iii) implications for Hessian approximation, pruning interpretability, and second-order optimization.
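To make the overlap metric concrete, here is a minimal NumPy sketch under one common Grassmannian definition: treat the magnitude mask as a coordinate subspace and measure how much of the top-eigenvector energy falls on the masked coordinates. The function name, the exact normalization, and the chance-level baseline are illustrative assumptions, not necessarily the paper's precise definition.

```python
import numpy as np

def subspace_overlap(eigvecs: np.ndarray, mask: np.ndarray) -> float:
    """Overlap between span(eigvecs) and the coordinate subspace picked by `mask`.

    eigvecs: (n_params, k) orthonormal top Hessian eigenvectors.
    mask:    (n_params,) boolean magnitude mask.

    With U the eigenvector basis, overlap = ||U[mask, :]||_F^2 / k.
    It equals 1 when the eigenspace lies entirely inside the masked
    coordinates; for a random k-dimensional subspace its expected value
    (the chance level) is mask.mean(), the fraction of kept parameters.
    """
    k = eigvecs.shape[1]
    return float(np.linalg.norm(eigvecs[mask, :]) ** 2 / k)

# Toy usage with stand-in data (not real Hessian eigenvectors).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((1000, 32)))  # random orthonormal basis
mask = rng.random(1000) < 0.1                          # random "10% magnitude" mask
print(subspace_overlap(U, mask))                       # close to 0.1, i.e. chance level
```

Under this definition, the paper's finding corresponds to measured overlaps that sit consistently above the mask.mean() baseline for magnitude masks, whereas random masks or random subspaces would hover at it.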
📝 Abstract
Recently, it has been observed that when training a deep neural net with SGD, the majority of the loss landscape's curvature quickly concentrates in a tiny *top* eigenspace of the loss Hessian, which remains largely stable thereafter. Independently, it has been shown that successful magnitude pruning masks for deep neural nets emerge early in training and likewise remain stable. In this work, we study these two phenomena jointly and show that they are connected: we develop a methodology to measure the similarity between arbitrary parameter masks and Hessian eigenspaces via Grassmannian metrics. We identify *overlap* as the most useful such metric due to its interpretability and stability. To compute *overlap*, we develop a matrix-free algorithm based on sketched SVDs that allows us to compute over 1,000 Hessian eigenpairs for nets with over 10M parameters, an unprecedented scale by several orders of magnitude. Our experiments reveal that the *overlap* between magnitude parameter masks and top Hessian eigenspaces is consistently higher than chance level, and that this effect becomes more pronounced for larger networks. This result indicates that *top Hessian eigenvectors tend to be concentrated around larger parameters*, or equivalently, that *larger parameters tend to align with directions of larger loss curvature*. Our work provides a methodology to approximate and analyze deep learning Hessians at scale, as well as a novel insight into the structure of their eigenspaces.
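For intuition on how many Hessian eigenpairs can be obtained without ever materializing the Hessian, the sketch below combines Hessian-vector products (Pearlmutter's trick) with a randomized range finder and a Rayleigh-Ritz projection in PyTorch. It illustrates the general matrix-free idea, not the authors' sketched-SVD implementation; `loss_fn`, the flat `params` tensor, and all hyperparameters are assumptions made for the example.

```python
import torch

def top_hessian_eigenpairs(loss_fn, params, k=64, oversample=16, n_iter=2):
    """Approximate the top-k Hessian eigenpairs of loss_fn at `params`,
    using only Hessian-vector products (the Hessian is never formed).

    params: a single flat tensor with requires_grad=True; loss_fn(params)
    must return a scalar loss. Hyperparameters are illustrative defaults.
    """
    n = params.numel()

    def hvp(v):
        # Pearlmutter's trick: differentiate <grad(loss), v> once more.
        loss = loss_fn(params)
        (grad,) = torch.autograd.grad(loss, params, create_graph=True)
        (hv,) = torch.autograd.grad(torch.dot(grad, v), params)
        return hv.detach()

    # Randomized range finder: sketch H with a Gaussian test matrix,
    # then refine the basis with a few subspace iterations.
    Q = torch.linalg.qr(
        torch.randn(n, k + oversample, dtype=params.dtype, device=params.device)
    ).Q
    for _ in range(n_iter):
        HQ = torch.stack([hvp(Q[:, j]) for j in range(Q.shape[1])], dim=1)
        Q = torch.linalg.qr(HQ).Q

    # Rayleigh-Ritz: diagonalize the small projected matrix Q^T H Q.
    HQ = torch.stack([hvp(Q[:, j]) for j in range(Q.shape[1])], dim=1)
    T = Q.T @ HQ
    evals, V = torch.linalg.eigh(0.5 * (T + T.T))
    order = torch.argsort(evals.abs(), descending=True)[:k]
    return evals[order], Q @ V[:, order]
```

In a realistic setting the per-column HVP loop would be batched and the loss re-evaluated on minibatches; the point of the sketch is only that the dominant cost is a modest number of Hessian-vector products, which is what makes thousands of eigenpairs tractable for 10M-parameter models.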