🤖 AI Summary
This paper studies accelerated convergence of gradient descent (GD) for ℓ₂-regularized logistic regression on linearly separable data. Conventional small step sizes, which guarantee monotone descent, yield only an Õ(κ) iteration complexity, where κ is the condition number; the authors instead propose and rigorously analyze a large-step-size GD scheme. They establish that with appropriately chosen large steps, although the objective is not monotonically decreasing, the iteration complexity improves from Õ(κ) to Õ(√κ). They further characterize the largest step size ensuring local convergence, which also determines global convergence in special scenarios, and extend the analysis to population risk minimization for separable distributions, improving the best-known upper bound on the number of steps needed to reach a near-optimal solution. The key innovation lies in breaking the monotonicity paradigm: identifying conditions for global and local convergence under large steps, thereby providing both theoretical foundations and practical guidance for efficiently optimizing models on separable data.
📝 Abstract
We study gradient descent (GD) with a constant stepsize for $\ell_2$-regularized logistic regression with linearly separable data. Classical theory suggests small stepsizes to ensure monotonic reduction of the optimization objective, achieving exponential convergence in $\widetilde{\mathcal{O}}(\kappa)$ steps with $\kappa$ being the condition number. Surprisingly, we show that this can be accelerated to $\widetilde{\mathcal{O}}(\sqrt{\kappa})$ by simply using a large stepsize -- for which the objective evolves nonmonotonically. The acceleration brought by large stepsizes extends to minimizing the population risk for separable distributions, improving on the best-known upper bounds on the number of steps to reach a near-optimum. Finally, we characterize the largest stepsize for the local convergence of GD, which also determines the global convergence in special scenarios. Our results extend the analysis of Wu et al. (2024) from convex settings with minimizers at infinity to strongly convex cases with finite minimizers.
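The setting can be illustrated numerically. The sketch below (toy data, stepsize values, and the regularization strength are our own assumptions, not the paper's exact construction or schedule) runs constant-stepsize GD on the $\ell_2$-regularized logistic loss over linearly separable data: a stepsize well above the classical $1/L$ choice makes the loss evolve nonmonotonically, yet GD can still settle at the finite minimizer of the strongly convex objective.

```python
import numpy as np

# Toy instance (illustrative assumption, not the paper's construction):
# separable data, l2-regularized logistic loss, constant-stepsize GD.
rng = np.random.default_rng(0)
n, d = 40, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))      # labels in {-1,+1}, separable by construction
lam = 1e-3                               # regularization strength (assumed value)

def loss(w):
    # mean logistic loss + (lam/2)||w||^2, computed stably via logaddexp
    return np.mean(np.logaddexp(0.0, -y * (X @ w))) + 0.5 * lam * w @ w

def grad(w):
    m = np.clip(y * (X @ w), -500, 500)  # margins; clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(m))          # sigmoid of the negative margin
    return -(X.T @ (y * p)) / n + lam * w

def gd(eta, steps=500):
    w = np.zeros(d)
    losses = []
    for _ in range(steps):
        losses.append(loss(w))
        w = w - eta * grad(w)
    return losses

# Smoothness upper bound: the Hessian is dominated by X^T X / (4n) + lam * I.
L = np.linalg.eigvalsh(X.T @ X).max() / (4 * n) + lam
small = gd(1.0 / L)   # classical "safe" stepsize: monotone descent
large = gd(8.0 / L)   # large stepsize: the loss need not decrease monotonically

print("small-stepsize final loss:", small[-1])
print("large-stepsize final loss:", large[-1])
print("large-stepsize loss monotone:",
      all(a >= b for a, b in zip(large, large[1:])))
```

The factor 8 is an arbitrary illustration of "large"; the paper's theory characterizes the actual largest admissible stepsize, which this sketch does not attempt to compute.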