Does Flatness imply Generalization for Logistic Loss in Univariate Two-Layer ReLU Network?

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates whether flatness implies generalization for univariate two-layer ReLU networks trained with logistic loss—a fundamental question in deep learning theory. Combining rigorous theoretical analysis with controlled simulations, the authors characterize gradient descent trajectories, the structure of interpolating solutions, and generalization behavior relative to the uncertain sets determined by each candidate solution. They prove that flat minima achieve near-optimal generalization bounds within the interval between the left-most and right-most uncertain sets; however, they explicitly construct arbitrarily flat yet severely overfitting counterexamples, thereby rigorously refuting the sufficiency of flatness for generalization—the first such formal disproof for logistic loss. Experiments further reveal “false certainty”: high flatness coexisting with poor generalization due to spurious confidence in uncertain regions. The work clarifies that, under logistic loss, the relationship between flatness and generalization is neither monotonic nor sufficient, providing both a critical counterexample and a fine-grained characterization essential for neural network generalization theory.

📝 Abstract
We consider the problem of generalization of arbitrarily overparameterized two-layer ReLU neural networks with univariate input. Recent work showed that under square loss, flat solutions (motivated by flat/stable minima and the Edge of Stability phenomenon) provably cannot overfit, but it remains unclear whether the same phenomenon holds for logistic loss. This is a puzzling open problem because existing work on logistic loss shows that gradient descent with increasing step size converges to interpolating solutions (at infinity, in the margin-separable cases). In this paper, we prove that the “flatness implies generalization” phenomenon is more delicate under logistic loss. On the positive side, we show that flat solutions enjoy near-optimal generalization bounds within a region between the left-most and right-most “uncertain” sets determined by each candidate solution. On the negative side, we show that there exist arbitrarily flat yet overfitting solutions at infinity that are (falsely) certain everywhere, thus certifying that flatness alone is insufficient for generalization in general. We demonstrate the effects predicted by our theory in a well-controlled simulation study.
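The negative result can be illustrated numerically. Below is a minimal sketch (our own construction, not code from the paper): assuming the standard parameterization f(x) = Σⱼ aⱼ·relu(wⱼx + bⱼ), we hand-build a network that exactly interpolates noise-like alternating ±1 labels on five 1D points, then scale the output layer toward infinity. The logistic loss and a crude Hutchinson-style sharpness proxy (second differences of the loss along random parameter directions) both collapse toward zero, even though the interpolant is confidently wrong off the flat-minimum interval—mirroring the "arbitrarily flat yet overfitting solutions at infinity" of the abstract.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def net(a, w, b, x):
    """Two-layer univariate ReLU net: f(x) = sum_j a_j * relu(w_j*x + b_j)."""
    return relu(np.outer(np.atleast_1d(x), w) + b) @ a

def logistic_loss(a, w, b, x, y):
    # mean of log(1 + exp(-y_i * f(x_i))), labels y_i in {-1, +1}
    return np.mean(np.logaddexp(0.0, -y * net(a, w, b, x)))

def sharpness_proxy(a, w, b, x, y, eps=1e-3, n_dirs=20, seed=0):
    """Crude flatness proxy: mean second difference of the loss along
    random parameter directions (a Hutchinson-style trace estimate)."""
    rng = np.random.default_rng(seed)
    mid = logistic_loss(a, w, b, x, y)
    vals = []
    for _ in range(n_dirs):
        da, dw, db = (rng.standard_normal(p.shape) for p in (a, w, b))
        up = logistic_loss(a + eps*da, w + eps*dw, b + eps*db, x, y)
        dn = logistic_loss(a - eps*da, w - eps*dw, b - eps*db, x, y)
        vals.append((up + dn - 2.0*mid) / eps**2)
    return float(np.mean(vals))

def interpolating_net(xs, ys, M=10.0):
    """Hand-built ReLU net whose piecewise-linear output hits (x_i, y_i)
    exactly on [-M, M]: knots at the interior points, plus two units
    (weights +/-1, bias M) encoding the affine part of the first segment."""
    xs = np.asarray(xs, float); ys = np.asarray(ys, float)
    s = np.diff(ys) / np.diff(xs)          # per-segment slopes
    c = ys[0] - s[0]*xs[0]                 # intercept of the first segment
    A = (s[0] + c/M) / 2.0                 # c + s0*x = A*relu(x+M) + B*relu(M-x)
    B = (c/M - s[0]) / 2.0                 # (exact for |x| <= M)
    w = np.concatenate(([1.0, -1.0], np.ones(len(xs) - 2)))
    b = np.concatenate(([M, M], -xs[1:-1]))
    a = np.concatenate(([A, B], np.diff(s)))
    return a, w, b

# Noise-like alternating labels that any "simple" rule would not produce.
xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ys = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
a, w, b = interpolating_net(xs, ys)

for scale in (1.0, 50.0):  # scaling the output layer drives the solution "to infinity"
    loss = logistic_loss(scale*a, w, b, xs, ys)
    sharp = sharpness_proxy(scale*a, w, b, xs, ys)
    print(f"scale={scale:5.1f}  loss={loss:.2e}  sharpness~{sharp:.2e}")
```

Because the network is linear in the output weights a, scaling them multiplies every margin, so the logistic loss decays exponentially and the Hessian-like curvature vanishes with it; flatness at infinity is achieved regardless of how badly the interpolant overfits the labels.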
Problem

Research questions and friction points this paper is trying to address.

Generalization of overparameterized two-layer ReLU networks with logistic loss
Flatness alone insufficient for generalization under logistic loss
Existence of flat but overfitting solutions at infinity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flatness-generalization analysis under logistic loss
Uncertain sets define near-optimal bounds
Arbitrarily flat overfitting solutions exist