Armijo Line-search Makes (Stochastic) Gradient Descent Go Fast

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper closes a theoretical gap in characterizing the convergence rate of gradient descent with Armijo line-search (Armijo-LS). For the first time, it rigorously establishes that, under non-uniform smoothness and gradient domination, GD with Armijo-LS achieves linear convergence and provably outperforms fixed-step GD with step-size $1/L$. Methodologically, the analysis removes the reliance on a global Lipschitz constant $L$ via the adaptive step-size mechanism of the line-search, and extends to stochastic gradient descent with a stochastic line-search. Key contributions are: (1) unified linear-convergence guarantees for both convex and non-convex (gradient-dominated) objectives; (2) under interpolation, the stochastic variant matches the fast convergence of deterministic Armijo-LS; and (3) empirical validation on logistic regression, multiclass classification, and policy-gradient methods demonstrating the improvement from sublinear to linear convergence under Armijo-LS.

📝 Abstract
Armijo line-search (Armijo-LS) is a standard method to set the step-size for gradient descent (GD). For smooth functions, Armijo-LS alleviates the need to know the global smoothness constant $L$ and adapts to the local smoothness, enabling GD to converge faster. However, existing theoretical analyses of GD with Armijo-LS (GD-LS) do not characterize this fast convergence. We show that if the objective function satisfies a certain non-uniform smoothness condition, GD-LS converges provably faster than GD with a constant $1/L$ step-size (denoted as GD(1/L)). Our results imply that for convex losses corresponding to logistic regression and multi-class classification, GD-LS can converge to the optimum at a linear rate and, hence, improve over the sublinear convergence of GD(1/L). Furthermore, for non-convex losses satisfying gradient domination (for example, those corresponding to the softmax policy gradient in RL or generalized linear models with a logistic link function), GD-LS can match the fast convergence of algorithms tailored for these specific settings. Finally, we prove that under the interpolation assumption, for convex losses, stochastic GD with a stochastic line-search can match the fast convergence of GD-LS.
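The backtracking procedure the abstract describes can be sketched as follows. This is a minimal illustration of GD with an Armijo line-search, not the paper's exact algorithm; the function name `armijo_gd` and the parameter defaults (`eta_max`, `c`, `beta`) are assumptions for the sketch.

```python
import numpy as np

def armijo_gd(f, grad, x0, eta_max=1.0, c=0.5, beta=0.8, tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking line-search (a sketch).

    Each iteration starts from eta_max and shrinks the step-size by beta
    until the Armijo sufficient-decrease condition holds:
        f(x - eta * g) <= f(x) - c * eta * ||g||^2
    so the step adapts to the local smoothness instead of using a
    global 1/L step-size.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = float(g @ g)
        if gnorm2 < tol:
            break  # (near-)stationary point reached
        eta = eta_max
        while f(x - eta * g) > f(x) - c * eta * gnorm2:
            eta *= beta  # backtrack until sufficient decrease
        x = x - eta * g
    return x

# Minimize a simple quadratic; the minimizer is the origin.
x_min = armijo_gd(lambda v: float(v @ v), lambda v: 2 * v, [3.0, -2.0])
```

Because the condition is checked with the current function values only, no global smoothness constant $L$ is ever needed, which is the adaptivity the paper's analysis exploits.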
Problem

Research questions and friction points this paper is trying to address.

Existing analyses of GD with Armijo line-search (GD-LS) do not explain its empirically fast convergence
Under what conditions does GD-LS provably converge faster than GD with a constant $1/L$ step-size?
Does stochastic GD with a stochastic line-search retain this speed-up?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear-convergence guarantees for GD-LS under non-uniform smoothness, with no global Lipschitz constant required
Results cover convex losses (logistic regression, multiclass classification) and gradient-dominated non-convex losses (softmax policy gradient, generalized linear models with a logistic link)
Under interpolation, stochastic GD with a stochastic line-search matches the fast convergence of GD-LS
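The stochastic variant discussed above can be sketched in the same style: the Armijo condition is checked on the sampled mini-batch loss rather than the full objective. This is an illustrative sketch under the interpolation assumption (every mini-batch shares the same minimizer); the function name `sgd_sls` and the least-squares setup are assumptions, not the paper's code.

```python
import numpy as np

def sgd_sls(f_batch, grad_batch, x0, batches, eta_max=1.0, c=0.5, beta=0.8, epochs=20):
    """SGD with a stochastic Armijo line-search (a sketch): the
    sufficient-decrease condition is evaluated on the sampled
    mini-batch loss only, not on the full objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        for A, y in batches:
            g = grad_batch(x, A, y)
            gnorm2 = float(g @ g)
            if gnorm2 < 1e-12:
                continue  # mini-batch gradient already (near) zero
            eta = eta_max
            # Backtrack on the mini-batch loss until sufficient decrease holds.
            while f_batch(x - eta * g, A, y) > f_batch(x, A, y) - c * eta * gnorm2:
                eta *= beta
            x = x - eta * g
    return x

# Interpolation setting: two least-squares mini-batches that share
# the exact solution x* = [1, -1].
f = lambda x, A, y: 0.5 * float(np.sum((A @ x - y) ** 2))
g = lambda x, A, y: A.T @ (A @ x - y)
batches = [
    (np.eye(2), np.array([1.0, -1.0])),
    (np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([0.0, 2.0])),
]
x_hat = sgd_sls(f, g, [0.0, 0.0], batches)
# → approximately [1., -1.]
```

Under interpolation, each accepted mini-batch step also decreases the full loss toward the shared minimizer, which is the intuition behind the stochastic variant matching deterministic GD-LS.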