📝 Abstract
Flat minima are widely believed to correlate with improved generalisation in deep neural networks. However, recent studies have shown this connection to be more nuanced, with both theoretical counterexamples and empirical exceptions emerging in the literature. In this paper, we revisit the role of sharpness in model performance, proposing that sharpness is better understood as a function-dependent property rather than a reliable indicator of poor generalisation. We conduct extensive empirical studies, from single-objective optimisation to modern image classification tasks, showing that sharper minima often emerge when models are regularised (e.g., via SAM, weight decay, or data augmentation), and that these sharp minima can coincide with better generalisation, calibration, robustness, and functional consistency. Across a range of models and datasets, we find that baselines without regularisation tend to converge to flatter minima yet often perform worse on all of these metrics. Our findings demonstrate that functional complexity, rather than flatness alone, shapes the geometry of solutions, and that sharper minima can reflect more appropriate inductive biases (especially under regularisation), calling for a function-centric reappraisal of loss landscape geometry.
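For readers unfamiliar with SAM (one of the regularisers studied above), the sketch below illustrates its standard two-step ascent/descent update on a toy quadratic loss. This is a minimal illustration of the general SAM rule, not the paper's experimental setup; the loss, dimensionality, and hyperparameter values (`lr`, `rho`) are illustrative assumptions.

```python
import math

# Toy quadratic loss f(w) = 0.5 * sum(a_i * w_i^2), whose gradient is a_i * w_i.
# SAM first ascends to the approximate worst-case point within a rho-ball,
# then descends using the gradient taken at that perturbed point.

def grad(a, w):
    """Gradient of the toy quadratic loss at w."""
    return [ai * wi for ai, wi in zip(a, w)]

def sam_step(a, w, lr=0.05, rho=0.05):
    """One SAM update; lr and rho are illustrative hyperparameters."""
    g = grad(a, w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    # Ascent step: move to the worst-case neighbour within the rho-ball.
    w_pert = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    # Descent step: apply the gradient evaluated at the perturbed point.
    g_adv = grad(a, w_pert)
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

if __name__ == "__main__":
    a = [1.0, 10.0]          # anisotropic curvature: one flat, one sharp direction
    w = [1.0, 1.0]
    for _ in range(100):
        w = sam_step(a, w)
    print(w)                 # both coordinates shrink toward the minimum at 0
```

Because the descent gradient is taken at the worst-case neighbour rather than at `w` itself, the update penalises solutions whose loss rises steeply nearby, which is what biases SAM toward flatter basins.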