A Local Polyak-Lojasiewicz and Descent Lemma of Gradient Descent For Overparametrized Linear Models

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the linear convergence of gradient descent for overparameterized two-layer linear neural networks, aiming to eliminate reliance on traditional assumptions, namely infinite width, infinitesimal step sizes, and specialized initialization. Because the classical ingredients for non-convex convergence analysis (the global Polyak-Łojasiewicz (PL) condition and the Descent Lemma) fail to hold in overparameterized settings, the authors establish that both conditions hold locally near the optimal solution under general loss functions. They derive explicit, computable bounds on the local PL and descent constants, expressed in terms of initialization quality, the current loss value, and the PL and smoothness constants of the non-overparameterized model. As a consequence, they obtain a linear convergence rate under significantly relaxed assumptions, provide theoretical guidance for adaptive step-size selection, and validate the findings through numerical experiments.
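
For reference, the two classical ingredients named in the summary can be stated as below. The notation (loss L, parameters θ, optimum L*, constants μ and L) is generic rather than the paper's; the paper's contribution is to show that weight-dependent local analogues of these inequalities hold near the optimum, since the global versions fail for overparameterized models.

```latex
% Global Polyak-Lojasiewicz (PL) condition: the gradient norm controls suboptimality
\frac{1}{2}\,\|\nabla \mathcal{L}(\theta)\|^{2} \;\ge\; \mu\,\bigl(\mathcal{L}(\theta) - \mathcal{L}^{\ast}\bigr)

% Descent Lemma (L-smoothness): a quadratic upper bound on the loss
\mathcal{L}(\theta') \;\le\; \mathcal{L}(\theta)
  + \langle \nabla \mathcal{L}(\theta),\, \theta' - \theta \rangle
  + \frac{L}{2}\,\|\theta' - \theta\|^{2}
```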

📝 Abstract
Most prior work on the convergence of gradient descent (GD) for overparameterized neural networks relies on strong assumptions on the step size (infinitesimal), the hidden-layer width (infinite), or the initialization (large, spectral, balanced). Recent efforts to relax these assumptions focus on two-layer linear networks trained with the squared loss. In this work, we derive a linear convergence rate for training two-layer linear neural networks with GD for general losses and under relaxed assumptions on the step size, width, and initialization. A key challenge in deriving this result is that classical ingredients for deriving convergence rates for nonconvex problems, such as the Polyak-Łojasiewicz (PL) condition and Descent Lemma, do not hold globally for overparameterized neural networks. Here, we prove that these two conditions hold locally with local constants that depend on the weights. Then, we provide bounds on these local constants, which depend on the initialization of the weights, the current loss, and the global PL and smoothness constants of the non-overparameterized model. Based on these bounds, we derive a linear convergence rate for GD. Our convergence analysis not only improves upon prior results but also suggests a better choice for the step size, as verified through our numerical experiments.
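
To make the setting concrete, below is a minimal, hypothetical sketch of GD on a two-layer linear model f(x) = W2 W1 x with the squared loss. The small random initialization and the norm-based step-size heuristic are illustrative assumptions only; the paper's actual bounds on the local PL and smoothness constants, and its suggested step-size rule, are not reproduced here.

```python
# Illustrative sketch only (not the paper's algorithm): gradient descent on an
# overparameterized two-layer linear model f(x) = W2 @ W1 @ x with the squared
# loss. The norm-based step-size heuristic below is a hypothetical stand-in for
# the paper's bounds on the local smoothness (descent-lemma) constant.
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 5, 20, 100                      # input dim, hidden width, samples
X = rng.standard_normal((n, d))
W_star = rng.standard_normal((1, d))      # ground-truth linear map
y = X @ W_star.T

W1 = 0.1 * rng.standard_normal((h, d))    # small random (non-spectral) init
W2 = 0.1 * rng.standard_normal((1, h))

def loss(W1, W2):
    r = X @ (W2 @ W1).T - y
    return 0.5 * np.mean(r ** 2)

spec_Sigma = np.linalg.norm(X, 2) ** 2 / n    # spectral norm of X^T X / n

for t in range(500):
    r = X @ (W2 @ W1).T - y                # residuals, shape (n, 1)
    grad_prod = (r.T @ X) / n              # gradient w.r.t. the product W2 @ W1
    g_W1 = W2.T @ grad_prod                # chain rule through the first layer
    g_W2 = grad_prod @ W1.T                # chain rule through the second layer

    # Heuristic local smoothness estimate: grows with the layer norms and the
    # current residual, so the step size shrinks as the weights grow.
    L_local = 2.0 * spec_Sigma * (np.linalg.norm(W1, 2) ** 2
                                  + np.linalg.norm(W2, 2) ** 2) \
              + np.linalg.norm(grad_prod, 2)
    eta = 1.0 / L_local

    W1 = W1 - eta * g_W1
    W2 = W2 - eta * g_W2
    if t % 100 == 0:
        print(f"iter {t:4d}  step {eta:.3f}  loss {loss(W1, W2):.3e}")
```

The printed losses typically decay roughly geometrically, consistent with a linear convergence rate, and the automatically shrinking step size mirrors the qualitative point that the admissible step depends on the current weights.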
Problem

Research questions and friction points this paper is trying to address.

Analyzing convergence of gradient descent for overparameterized linear models
Relaxing strong assumptions on step size, width, and initialization
Proving local Polyak-Lojasiewicz and Descent Lemma conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves local Polyak-Lojasiewicz and Descent Lemma conditions
Bounds local constants based on initialization and loss
Derives a linear convergence rate for gradient descent (see the sketch below)
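
A one-step sketch of how the two local conditions combine into a linear rate (generic notation; the paper's explicit expressions for the local constants μ_t and L_t, which depend on the initialization and the current loss, are not reproduced here):

```latex
% One GD step \theta_{t+1} = \theta_t - \eta \nabla\mathcal{L}(\theta_t), assuming the local
% descent lemma with constant L_t, the local PL condition with constant \mu_t, and \eta \le 2/L_t:
\mathcal{L}(\theta_{t+1}) - \mathcal{L}^{\ast}
  \;\le\; \mathcal{L}(\theta_t) - \mathcal{L}^{\ast}
          - \eta\Bigl(1 - \tfrac{\eta L_t}{2}\Bigr)\|\nabla\mathcal{L}(\theta_t)\|^{2}
  \;\le\; \Bigl(1 - 2\mu_t\,\eta\bigl(1 - \tfrac{\eta L_t}{2}\bigr)\Bigr)
          \bigl(\mathcal{L}(\theta_t) - \mathcal{L}^{\ast}\bigr)
```

Maximizing the contraction over η gives η = 1/L_t, which is the sense in which computable bounds on the local constants translate into step-size guidance.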