🤖 AI Summary
This paper investigates whether flatness implies generalization for two-layer ReLU networks with univariate input trained with logistic loss, a fundamental question in deep learning theory. Combining rigorous theoretical analysis with controlled simulations, we characterize gradient descent trajectories, the structure of interpolating solutions, and generalization behavior relative to the uncertain sets induced by each candidate solution. We prove that flat minima achieve near-optimal generalization bounds within the interval between the left-most and right-most uncertain sets; however, we also explicitly construct arbitrarily flat yet severely overfitting solutions at infinity, rigorously refuting the sufficiency of flatness for generalization, the first such formal disproof under logistic loss. Experiments further reveal “false certainty”: high flatness coexisting with poor generalization due to spurious confidence in uncertain regions. Our work clarifies that, under logistic loss, the relationship between flatness and generalization is neither monotonic nor sufficient, providing both a critical counterexample and a fine-grained characterization for neural network generalization theory.
📝 Abstract
We consider the problem of generalization for arbitrarily overparameterized two-layer ReLU neural networks with univariate input. Recent work showed that under square loss, flat solutions (motivated by flat/stable minima and the Edge of Stability phenomenon) provably cannot overfit, but it remains unclear whether the same phenomenon holds for logistic loss. This is a puzzling open problem because existing work on logistic loss shows that gradient descent with increasing step size converges to interpolating solutions (at infinity, for the margin-separable cases). In this paper, we prove that the *flatness implies generalization* phenomenon is more delicate under logistic loss. On the positive side, we show that flat solutions enjoy near-optimal generalization bounds within the region between the left-most and right-most *uncertain* sets determined by each candidate solution. On the negative side, we show that there exist arbitrarily flat yet overfitting solutions at infinity that are (falsely) certain everywhere, thus certifying that flatness alone is insufficient for generalization. We demonstrate the effects predicted by our theory in a well-controlled simulation study.
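The setting described above (a two-layer ReLU network with univariate input, trained on logistic loss by gradient descent, with flatness measured via the top Hessian eigenvalue) can be sketched in a few lines of NumPy. This is an illustrative toy script only, not the paper's experimental code: the data distribution, labels, width, learning rate, and the finite-difference power-iteration sharpness estimator are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 1D binary classification data: labels = sign(x).
n, m = 32, 40                       # n samples, m hidden units (overparameterized)
X = rng.uniform(-1.0, 1.0, n)
y = np.where(X >= 0, 1.0, -1.0)

# Two-layer ReLU network: f(x) = sum_j a_j * relu(w_j * x + b_j)
w = rng.normal(0.0, 1.0, m)
b = rng.normal(0.0, 1.0, m)
a = rng.normal(0.0, 1.0 / np.sqrt(m), m)

def logistic_loss(w, b, a):
    f = np.maximum(0.0, np.outer(X, w) + b) @ a
    return np.mean(np.logaddexp(0.0, -y * f))   # mean log(1 + exp(-y f))

def grads(w, b, a):
    pre = np.outer(X, w) + b                    # (n, m) pre-activations
    h = np.maximum(0.0, pre)                    # ReLU activations
    f = h @ a                                   # network outputs
    # dL/df_i = -y_i * sigmoid(-y_i f_i) / n, via a numerically stable sigmoid
    s = -y * 0.5 * (1.0 - np.tanh(y * f / 2.0)) / n
    dpre = np.outer(s, a) * (pre > 0)           # backprop through the ReLU
    return X @ dpre, dpre.sum(axis=0), h.T @ s  # dw, db, da

def sharpness(w, b, a, iters=50, eps=1e-3):
    """Estimate the top Hessian eigenvalue (a standard flatness proxy)
    by power iteration on finite-difference Hessian-vector products."""
    th = np.concatenate([w, b, a])
    def g(t):
        dw, db, da = grads(t[:m], t[m:2 * m], t[2 * m:])
        return np.concatenate([dw, db, da])
    v = rng.normal(size=th.size)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = (g(th + eps * v) - g(th - eps * v)) / (2.0 * eps)
        lam = float(np.linalg.norm(hv))
        v = hv / (lam + 1e-12)
    return lam

init_loss = logistic_loss(w, b, a)
lr = 0.5
for _ in range(2000):
    dw, db, da = grads(w, b, a)
    w -= lr * dw; b -= lr * db; a -= lr * da
final_loss = logistic_loss(w, b, a)
print(f"loss {init_loss:.4f} -> {final_loss:.4f}, sharpness ~ {sharpness(w, b, a):.4f}")
```

A script like this makes the abstract's tension tangible: as the margins y·f(x) grow along the gradient descent trajectory, the logistic loss and its Hessian both shrink toward zero, which is why "flat solutions at infinity" can coexist with overfitting, as the negative result above formalizes.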