🤖 AI Summary
This work investigates the mechanistic relationship between loss-landscape flatness at minima and generalization in deep neural networks. Because conventional flatness measures, such as Hessian eigenvalues, are unreliable in overparameterized regimes, we propose a flatness metric based on the Hessian's soft rank that is numerically stable and theoretically interpretable, and that, unusually, ties flatness to model calibration. Methodologically, within the exponential-family negative log-likelihood loss framework, we estimate the soft rank from the Hessian's trace and spectral norm, and derive a theoretically grounded generalization-gap bound using the Takeuchi Information Criterion. Empirically, our metric predicts generalization gaps robustly and accurately across both calibrated and uncalibrated models, significantly outperforming existing baselines, especially when models avoid overconfidence.
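The soft-rank idea above can be sketched concretely. A common definition of the soft rank (also called the stable or effective rank) of a symmetric positive semi-definite Hessian is its trace divided by its spectral norm; the summary says the paper estimates it from exactly these two quantities, though the paper's precise normalization may differ from this minimal sketch.

```python
import numpy as np


def soft_rank(hessian: np.ndarray) -> float:
    """Soft (stable) rank: trace divided by spectral norm.

    For a PSD Hessian this equals sum(eigenvalues) / max(eigenvalue),
    a value between 1 and the full rank. It is robust to the overall
    scale of the Hessian, unlike the raw trace or spectral norm.
    This is a standard definition; the paper's exact estimator is an
    assumption here.
    """
    trace = np.trace(hessian)
    spectral_norm = np.linalg.norm(hessian, ord=2)  # largest singular value
    return float(trace / spectral_norm)


# Toy diagonal "Hessian" with one dominant eigenvalue: a sharp minimum
# in one direction, flat in the rest.
H = np.diag([10.0, 1.0, 1.0, 0.0])
print(soft_rank(H))  # trace = 12, spectral norm = 10 -> 1.2
```

Note how scaling `H` by any constant leaves the soft rank unchanged, which is one reason it can remain informative where the trace or spectral norm alone (which can be made arbitrarily large) fails.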
📝 Abstract
Recent literature has examined the relationship between the curvature of the loss function at minima and generalization, mainly in the context of overparameterized networks. A key observation is that "flat" minima tend to generalize better than "sharp" minima. While this idea is supported by empirical evidence, it has also been shown that deep networks can generalize even with arbitrary sharpness, as measured by either the trace or the spectral norm of the Hessian. In this paper, we argue that generalization can be assessed by measuring flatness with a soft rank measure of the Hessian. We show that when the common neural network model (a neural network with an exponential-family negative log-likelihood loss) is calibrated, and its prediction error and its confidence in the prediction are uncorrelated with the first and second derivatives of the network's output, our measure accurately captures the asymptotic expected generalization gap. For non-calibrated models, we connect our flatness measure to the well-known Takeuchi Information Criterion and show that it still provides reliable estimates of generalization gaps for models that are not overly confident. Experimental results indicate that our approach offers a more robust estimate of the generalization gap than baselines.
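The Takeuchi Information Criterion mentioned above penalizes the training loss by tr(I J⁻¹)/n, where I is the average outer product of per-sample score vectors and J is the average Hessian of the negative log-likelihood at the fitted parameters; for a well-specified model this penalty reduces to the parameter count, recovering AIC. The sketch below illustrates only this penalty term on synthetic stand-in matrices; it is not the paper's estimator, and the matrices here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample scores (gradients of the log-likelihood)
# for a d-parameter model evaluated at its fitted optimum.
n, d = 200, 3
scores = rng.normal(size=(n, d))

# I_hat: average outer product of per-sample scores (Fisher-like term).
I_hat = scores.T @ scores / n

# J_hat: average per-sample Hessian of the negative log-likelihood.
# Here a synthetic SPD stand-in for illustration only.
J_hat = I_hat + 0.5 * np.eye(d)

# TIC penalty: trace(I J^{-1}). Under correct specification I = J,
# so the penalty would equal d, the number of parameters (as in AIC).
tic_penalty = np.trace(I_hat @ np.linalg.inv(J_hat))
print(tic_penalty)
```

Because `J_hat` is inflated relative to `I_hat` in this toy setup, the printed penalty is strictly between 0 and d, illustrating how model misspecification moves the TIC correction away from the AIC value.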