Generalization Below the Edge of Stability: The Role of Data Geometry

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how data geometry governs implicit bias, and thereby generalization versus memorization, in overparameterized two-layer ReLU networks trained below the edge of stability. It introduces the "shatter" relation, a geometric criterion capturing the interplay between the data distribution (e.g., supported on low-dimensional balls, or isotropic with varying concentration toward the unit sphere) and the activation thresholds of ReLU neurons: when data are hard to shatter, gradient descent favors learning shared patterns and generalizes well; when data are easy to shatter, it tends toward memorization. Through a theoretical analysis of gradient descent dynamics on two-layer ReLU networks trained on data of varying intrinsic dimension, the paper derives a spectrum of data-adaptive generalization bounds, unifying the generalization-memorization trade-off in overparameterized models through geometric separability and offering a novel theoretical perspective on implicit regularization.
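To give intuition for the shatter criterion, here is a minimal numerical sketch (not code from the paper; the threshold value and dimensions are illustrative assumptions) of why sphere-supported data is easy to shatter: a single ReLU neuron aligned with one sample and given a high activation threshold fires on that sample alone, because random unit vectors in high dimension are nearly orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 20  # ambient dimension, number of samples (illustrative)

# n random points on the unit sphere in R^d
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# One ReLU neuron with weight w = X[0] and threshold b = 0.9:
# activation relu(<w, x> - b)
acts = np.maximum(X @ X[0] - 0.9, 0.0)

# <X[0], X[0]> = 1 clears the threshold; the other inner products
# concentrate near 0, so only sample 0 activates the neuron.
print(acts[0] > 0, np.all(acts[1:] == 0))
```

For data filling a low-dimensional ball instead, points cannot all be isolated this way by a single halfspace with a large threshold, which is the sense in which such data is "hard to shatter".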

📝 Abstract
Understanding generalization in overparameterized neural networks hinges on the interplay between data geometry, neural architecture, and training dynamics. In this paper, we theoretically explore how data geometry controls the resulting implicit bias in overparameterized two-layer ReLU networks trained below the edge of stability. First, for data distributions supported on a mixture of low-dimensional balls, we derive generalization bounds that provably adapt to the intrinsic dimension. Second, for a family of isotropic distributions that vary in how strongly probability mass concentrates toward the unit sphere, we derive a spectrum of bounds showing that rates deteriorate as the mass concentrates toward the sphere. These results instantiate a unifying principle: when the data is harder to "shatter" with respect to the activation thresholds of the ReLU neurons, gradient descent tends to learn representations that capture shared patterns and thus finds solutions that generalize well. Conversely, for data that is easily shattered (e.g., data supported on the sphere), gradient descent favors memorization. Our theoretical results consolidate disparate empirical findings that have appeared in the literature.
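To make the training setup concrete, here is a hedged sketch of the kind of regime the abstract describes: an overparameterized two-layer ReLU network, data supported on a low-dimensional ball embedded in a higher-dimensional space, and full-batch gradient descent with a step size small enough to sit below the edge of stability. The squared loss, the frozen second layer, and the label rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n, k = 20, 512, 100, 3   # ambient dim, width, samples, intrinsic dim

# Data on a k-dimensional ball in R^d: only the first k coordinates vary
# (a hypothetical instance of low intrinsic dimension).
X = np.zeros((n, d))
X[:, :k] = rng.uniform(-1.0, 1.0, size=(n, k))
y = np.sign(X[:, 0])           # an assumed simple shared pattern

# Two-layer ReLU network f(x) = a^T relu(W x); only W is trained here.
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(X):
    return np.maximum(X @ W.T, 0.0) @ a    # predictions, shape (n,)

loss0 = np.mean((forward(X) - y) ** 2)

# Full-batch GD on the squared loss with a small step size, heuristically
# below the edge of stability (eta < 2 / sharpness).
eta = 0.05
for _ in range(500):
    err = forward(X) - y                   # (n,)
    active = (X @ W.T > 0).astype(float)   # (n, m) ReLU gates
    grad_W = ((active * a) * err[:, None]).T @ X / n
    W -= eta * grad_W

train_loss = np.mean((forward(X) - y) ** 2)
print(loss0, train_loss)
```

The paper's claim concerns what happens as this data geometry changes: replacing the ball with the unit sphere makes each point easy to isolate with a thresholded neuron, and gradient descent shifts from learning the shared pattern toward memorizing individual samples.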
Problem

Research questions and friction points this paper is trying to address.

How does data geometry shape generalization in overparameterized neural networks?
How do generalization bounds adapt to intrinsic dimension and to isotropic distributions concentrating toward the sphere?
How does the shatterability of the data govern whether gradient descent generalizes or memorizes?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes how data geometry controls implicit bias and generalization in overparameterized two-layer ReLU networks
Derives generalization bounds that adapt to intrinsic dimension for low-dimensional and isotropic data distributions
Shows that gradient descent learns shared patterns when data is hard to shatter and memorizes when it is easy