🤖 AI Summary
This work investigates the training dynamics and loss landscape of diagonal linear networks in linear regression when the network parameters are perturbed by small isotropic Gaussian noise. Method: The authors show that such noise acts as a stochastic form of sharpness-aware minimization (SAM) and derive an explicit analytical mapping from the noise level to a shrinkage factor and a threshold, both of which admit closed-form expressions. The noise alters the expected gradient so that the weight factors are balanced along the descent trajectory; in the diagonal linear model, this balancing minimizes both the average sharpness and the trace of the Hessian among all factorizations of the same predictor, tying the implicit regularization to the parameterization. Results: Experiments demonstrate more robust convergence trajectories, reduced loss-landscape sharpness, and improved generalization over unperturbed baselines.
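The SAM/Hessian-trace link can be sanity-checked numerically: for small σ, a second-order expansion gives E[L(θ+ε)] ≈ L(θ) + (σ²/2)·tr ∇²L(θ), and for the diagonal-network loss L(u, v) = ‖X(u⊙v) − y‖²/(2n) the Hessian trace is Σᵢ (uᵢ² + vᵢ²)(XᵀX/n)ᵢᵢ, which by AM-GM is minimized over factorizations u⊙v = w exactly when |uᵢ| = |vᵢ|. The sketch below is not code from the paper; the problem sizes, noise level σ, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem and a diagonal linear network w = u * v (elementwise).
# All sizes and the noise level sigma are illustrative assumptions.
n, d = 50, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)
u, v = rng.standard_normal(d), rng.standard_normal(d)

def loss(u, v):
    r = X @ (u * v) - y
    return 0.5 * r @ r / n

sigma = 0.05       # small perturbation level
m = 100_000        # Monte Carlo samples

# Monte Carlo estimate of E[L(u + eps_u, v + eps_v)] under isotropic Gaussian noise.
vals = np.empty(m)
for k in range(m):
    vals[k] = loss(u + sigma * rng.standard_normal(d),
                   v + sigma * rng.standard_normal(d))
mc_gap = vals.mean() - loss(u, v)

# Analytic Hessian trace: tr(H) = sum_i (u_i^2 + v_i^2) * (X^T X / n)_{ii}.
diag_gram = np.einsum('ij,ij->j', X, X) / n
penalty = 0.5 * sigma**2 * np.sum((u**2 + v**2) * diag_gram)

print(f"MC gap  E[L(theta+eps)] - L(theta): {mc_gap:.6f}")
print(f"(sigma^2/2) * tr(Hessian):          {penalty:.6f}")  # agree up to O(sigma^4)
```

Since the penalty (σ²/2)·Σᵢ (uᵢ² + vᵢ²)(XᵀX/n)ᵢᵢ is smallest at |uᵢ| = |vᵢ| for a fixed product, descending on the perturbed loss pushes the two factors toward balance, consistent with the summary above.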
📝 Abstract
We analyze the landscape and training dynamics of diagonal linear networks in a linear regression task, with the network parameters perturbed by small isotropic Gaussian noise. The addition of such noise can be interpreted as a stochastic form of sharpness-aware minimization (SAM), and we prove several results relating its action on the underlying landscape and training dynamics to the sharpness of the loss. In particular, the noise changes the expected gradient so as to balance the weight matrices at a fast rate along the descent trajectory. In the diagonal linear model, we show that this amounts to minimizing the average sharpness, as well as the trace of the Hessian matrix, among all possible factorizations of the same matrix. Furthermore, the noise drives the gradient descent iterates towards a shrinkage-thresholding of the underlying true parameter, with the noise level explicitly regulating both the shrinkage factor and the threshold.
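As a minimal illustration of the shrinkage-thresholding claim (not the paper's experiment), consider the identity design X = I, where the loss decouples per coordinate and the expected noisy gradients are E[∇ᵤL] = (v² + σ²)u − v·w* and E[∇ᵥL] = (u² + σ²)v − u·w*. A short fixed-point computation shows the balanced stationary points satisfy u² = v² with u·v = sign(w*)·max(|w*| − σ², 0), i.e., soft-thresholding with threshold σ² set by the noise variance. The w*, σ, learning rate, and step count below are assumed for illustration, and the stochastic iterates only match the prediction approximately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Identity design X = I, so the loss decouples per coordinate:
# L(u, v) = 0.5 * sum_i (u_i * v_i - w_star_i)^2.
# w_star, sigma, lr, and the step count are illustrative assumptions.
w_star = np.array([2.0, 1.0, 0.5, 0.2, 0.05, 0.0])
d = w_star.size
sigma, lr, steps = 0.3, 0.01, 20_000

u = np.full(d, 0.5)   # symmetric, moderately small initialization
v = np.full(d, 0.5)

tail = np.zeros(d)    # average u*v over the last steps to smooth the noise
for t in range(steps):
    up = u + sigma * rng.standard_normal(d)   # fresh isotropic noise each step
    vp = v + sigma * rng.standard_normal(d)
    r = up * vp - w_star                      # residual of the perturbed model
    u -= lr * vp * r                          # gradient evaluated at the perturbed point
    v -= lr * up * r
    if t >= steps - 2000:
        tail += u * v
w_hat = tail / 2000

soft = np.sign(w_star) * np.maximum(np.abs(w_star) - sigma**2, 0.0)
print("learned w  :", np.round(w_hat, 3))
print("soft-thresh:", np.round(soft, 3))                       # threshold sigma^2 = 0.09
print("balance |u|-|v|:", np.round(np.abs(u) - np.abs(v), 3))  # ~0 along the trajectory
```

Coordinates with |w*ᵢ| below σ² collapse to zero, the rest converge to a shrunken value, and the two factors stay balanced, matching the abstract's description of the noise level regulating both the shrinkage factor and the threshold.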