A Function Centric Perspective On Flat and Sharp Minima

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the conventional belief that flat minima inherently improve generalization, systematically investigating the generalization potential of sharp minima under regularization. We analyze loss-landscape characteristics, specifically sharpness and functional complexity, of models trained with SAM, weight decay, and data augmentation on image classification tasks. Contrary to expectation, regularization often induces sharper minima that exhibit lower functional complexity. Empirically, unregularized models converge to flatter minima yet generalize poorly; in contrast, regularized sharp minima achieve superior generalization accuracy, predictive calibration, adversarial robustness, and functional consistency. Our key contribution is the proposition that “sharpness is a function-dependent neutral property”: rather than interpreting curvature in isolation, we advocate characterizing minima via functional complexity, a more principled measure of model capacity. This reframing shifts the theoretical lens on generalization, offering a new paradigm grounded in the interplay between optimization geometry and hypothesis complexity.
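To make the sharpness measurements mentioned above concrete, below is a minimal sketch of one standard curvature proxy: the largest eigenvalue of the training-loss Hessian, estimated by power iteration over Hessian-vector products. This is a common sharpness measure in the flat-minima literature, shown here purely for illustration; the paper's exact metric may differ, and the function name `top_hessian_eigenvalue` and its arguments are hypothetical.

```python
# Minimal sketch (assumption, not the paper's exact metric): estimate the
# largest Hessian eigenvalue of the loss via power iteration with
# Hessian-vector products, a standard sharpness proxy.
import torch

def top_hessian_eigenvalue(model, loss_fn, x, y, iters=20):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    # create_graph=True keeps the graph so the gradient can be differentiated again.
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Random unit start vector, stored as one tensor per parameter.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((u * u).sum() for u in v))
    v = [u / norm for u in v]

    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: Hv = d/dw (g . v).
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        # Rayleigh quotient v^T H v (v has unit norm), the eigenvalue estimate.
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return eig
```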

📝 Abstract
Flat minima are widely believed to correlate with improved generalisation in deep neural networks. However, this connection has proven more nuanced in recent studies, with both theoretical counterexamples and empirical exceptions emerging in the literature. In this paper, we revisit the role of sharpness in model performance, proposing that sharpness is better understood as a function-dependent property rather than a reliable indicator of poor generalisation. We conduct extensive empirical studies, from single-objective optimisation to modern image classification tasks, showing that sharper minima often emerge when models are regularised (e.g., via SAM, weight decay, or data augmentation), and that these sharp minima can coincide with better generalisation, calibration, robustness, and functional consistency. Across a range of models and datasets, we find that baselines without regularisation tend to converge to flatter minima yet often perform worse across all safety metrics. Our findings demonstrate that function complexity, rather than flatness alone, governs the geometry of solutions, and that sharper minima can reflect more appropriate inductive biases (especially under regularisation), calling for a function-centric reappraisal of loss landscape geometry.
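Since the abstract names SAM as one of the regularisers studied, a minimal sketch of a single SAM update step may help readers unfamiliar with it. It follows the standard two-pass formulation of Foret et al. (2021), assuming a PyTorch model; this is an illustrative sketch under those assumptions, not the paper's actual training loop, and `sam_step` is a hypothetical helper name.

```python
# Minimal sketch of one Sharpness-Aware Minimization (SAM) step, assuming
# a PyTorch model and a standard base optimizer (e.g. SGD). Follows the
# generic SAM formulation, not necessarily this paper's exact setup.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # First pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Ascent direction eps = rho * g / ||g||, applied to the weights.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps_list = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)  # move to the local worst-case neighbour
            eps_list.append((p, eps))
    model.zero_grad()

    # Second pass: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then step with the worst-case gradient.
    with torch.no_grad():
        for p, eps in eps_list:
            p.sub_(eps)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```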
Problem

Research questions and friction points this paper is trying to address.

Revisiting the role of sharp minima as a function-dependent property rather than a reliable indicator of poor generalization
Investigating how sharper minima emerge under regularization and correlate with better performance
Demonstrating that function complexity, more than flatness alone, governs solution geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sharpness as a function-dependent property, not a generalization indicator
Regularization can induce sharp minima with improved model performance
Function complexity, rather than flatness alone, governs solution geometry (a simple proxy is sketched below)
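As referenced in the last point above, one function-centric quantity is easy to sketch. The snippet below assumes "functional consistency" is read as prediction agreement between independently trained models on held-out data; this reading, and the helper name `agreement_rate`, are assumptions for illustration, not the paper's definition.

```python
# Minimal sketch (illustrative assumption): functional consistency as the
# fraction of held-out inputs on which two independently trained models
# predict the same class.
import torch

@torch.no_grad()
def agreement_rate(model_a, model_b, loader, device="cpu"):
    agree, total = 0, 0
    model_a.eval()
    model_b.eval()
    for x, _ in loader:
        x = x.to(device)
        pred_a = model_a(x).argmax(dim=1)
        pred_b = model_b(x).argmax(dim=1)
        agree += (pred_a == pred_b).sum().item()
        total += x.size(0)
    return agree / total
```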
Israel Mason-Williams
UKRI Safe and Trusted AI, Imperial and King’s College London, London, United Kingdom
Gabryel Mason-Williams
Queen Mary University of London, London, United Kingdom
Helen Yannakoudakis
Senior Lecturer, King’s College London
Machine learning, natural language processing