A simple connection from loss flatness to compressed neural representations

📅 2023-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how loss sharpness—local curvature of the loss in parameter space—shapes the geometric structure of neural network feature spaces, with emphasis on its relationship to the compressibility of neural representations. We propose three differential-geometric compression metrics: local volume ratio, maximum local sensitivity, and local intrinsic dimension, and derive upper bounds linking sharpness to each metric. The analysis establishes, from a feature-space geometric perspective, theoretical constraints whereby sharper minima induce more compressed representations; the bounds extend to reparametrization-invariant definitions of sharpness. Methodologically, the work combines linear stability analysis, local geometric modeling, and inequality derivation. Empirical validation across MLPs, CNNs, and Transformers confirms that flatter minima yield less compressed representations, and the theoretical bounds capture a consistently positive correlation between sharpness and all three compression measures.
📝 Abstract
Sharpness, a geometric measure in parameter space that reflects the flatness of the loss landscape, has long been studied for its potential connections to neural network behavior. While sharpness is often associated with generalization, recent work highlights inconsistencies in this relationship, leaving its true significance unclear. In this paper, we investigate how sharpness influences the local geometric features of neural representations in feature space, offering a new perspective on its role. We introduce this problem and study three measures of compression: the Local Volumetric Ratio (LVR), based on volume compression; the Maximum Local Sensitivity (MLS), based on sensitivity to input changes; and the Local Dimensionality, based on how uniform the sensitivity is across different directions. We show that LVR and MLS correlate with the flatness of the loss around local minima, and that this correlation is predicted by a relatively simple mathematical relationship: a flatter loss corresponds to a lower upper bound on the compression metrics of neural representations. Our work builds on the linear stability insight of Ma and Ying, deriving inequalities between various compression metrics and quantities involving sharpness. Our inequalities readily extend to reparametrization-invariant sharpness as well. Through empirical experiments on various feedforward, convolutional, and transformer architectures, we find that our inequalities predict a consistently positive correlation between local representation compression and sharpness.
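To make the notion of sharpness used above concrete: one common operationalization is the largest eigenvalue of the loss Hessian at a minimum, which can be estimated without forming the Hessian via power iteration on Hessian-vector products. The sketch below is an illustrative construction under that assumption, not the paper's specific sharpness measure; `loss_grad` and `theta` are hypothetical names.

```python
import numpy as np

def sharpness(loss_grad, theta, iters=50, eps=1e-4):
    """Estimate sharpness as the top Hessian eigenvalue at parameters theta.

    Uses power iteration on finite-difference Hessian-vector products:
    H v ~= (grad(theta + eps*v) - grad(theta - eps*v)) / (2*eps).
    `loss_grad(theta)` must return the gradient of the loss at theta.
    """
    rng = np.random.default_rng(0)
    v = rng.normal(size=theta.shape)
    v /= np.linalg.norm(v)               # random unit starting direction
    lam = 0.0
    for _ in range(iters):
        hv = (loss_grad(theta + eps * v) - loss_grad(theta - eps * v)) / (2 * eps)
        lam = float(v @ hv)              # Rayleigh quotient estimate of the eigenvalue
        v = hv / np.linalg.norm(hv)      # power-iteration update
    return lam
```

For a quadratic loss with Hessian `diag(3, 1)`, this converges to the top eigenvalue 3; near a flat minimum the estimate is small, near a sharp one it is large.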
Problem

Research questions and friction points this paper is trying to address.

Investigates how sharpness affects neural representation geometry
Introduces three compression measures linked to loss flatness
Derives inequalities between compression metrics and sharpness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Links loss flatness to neural representation compression
Introduces Local Volumetric Ratio for compression
Proves upper bounds on representation compression in terms of sharpness
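The three compression measures can be sketched from the input-to-feature Jacobian. The definitions below are plausible readings of the abstract's descriptions (volume compression, worst-case input sensitivity, uniformity of sensitivity across directions), not the paper's exact formulas: LVR as the local volume element, MLS as the top singular value, and Local Dimensionality as a participation ratio of squared singular values.

```python
import numpy as np

def compression_metrics(J):
    """Local compression metrics from a Jacobian J (d_feat x d_in) of the
    input-to-feature map at one point. Illustrative definitions only:
      - LVR: volume element sqrt(det(J J^T)) -- how local volumes contract
      - MLS: largest singular value -- worst-case sensitivity to input change
      - local dim: participation ratio of squared singular values --
        how uniformly sensitivity is spread across directions
    """
    s = np.linalg.svd(J, compute_uv=False)       # singular values, descending
    lvr = np.prod(s)                             # sqrt(det(J J^T)) for d_feat <= d_in
    mls = s[0]                                   # spectral norm of J
    local_dim = (s**2).sum()**2 / (s**4).sum()   # ranges from 1 to rank(J)
    return lvr, mls, local_dim
```

For `J = diag(2, 1)` this gives LVR 2, MLS 2, and a local dimensionality of 25/17 ≈ 1.47, between 1 (one dominant direction) and 2 (isotropic sensitivity).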
Shirui Chen
Department of Applied Mathematics, Computational Neuroscience Center, University of Washington
Stefano Recanatesi
Technion, Israel Institute of Technology
Computational Neuroscience · Machine Learning · Neuro-AI · Network Dynamics
Eric Shea-Brown
Department of Applied Mathematics, Computational Neuroscience Center, University of Washington, Allen Institute