Stacey: Promoting Stochastic Steepest Descent via Accelerated $\ell_p$-Smooth Nonconvex Optimization

📅 2025-06-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Standard optimizers (e.g., SGD, AdamW, Lion) rely on ℓ₂- or ℓ∞-norm-based steepest descent, which fails to exploit the non-Euclidean geometric structure prevalent in deep neural network training. Method: We propose Stacey, a novel accelerated stochastic steepest descent algorithm for non-Euclidean ℓₚ-smooth nonconvex optimization with adaptive ℓₚ-norm updates (p ∈ (1, 2)), built upon an interpolation-type primal-dual iteration scheme. Contribution/Results: Stacey is the first method to establish provably accelerated convergence guarantees for ℓₚ-smooth nonconvex optimization, closing a long-standing theoretical gap. Empirically, it achieves faster convergence and higher final accuracy than state-of-the-art optimizers on image classification and large language model pretraining. Its core innovation lies in breaking the ℓ₂/ℓ∞ paradigm: by enabling task- and model-adaptive tuning of p, Stacey tailors the optimization dynamics to the intrinsic non-Euclidean geometry of the problem.
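The ℓₚ steepest-descent direction underlying such updates has a standard closed form via the dual norm (with 1/p + 1/q = 1). Below is a minimal NumPy sketch of that classical formula; the function name is illustrative, and this is not Stacey's actual implementation:

```python
import numpy as np

def lp_steepest_descent_direction(grad, p):
    """Steepest-descent direction under the l_p norm (classical closed form).

    Solves argmin_{||d||_p <= 1} <grad, d>. With the dual exponent
    q = p / (p - 1), the minimizer is
        d_i = -sign(g_i) * |g_i|**(q - 1) / ||g||_q**(q - 1),
    which has unit l_p norm. For p = 2 it reduces to the normalized
    negative gradient. Assumes grad is nonzero.
    """
    q = p / (p - 1.0)  # dual exponent: 1/p + 1/q = 1
    dual_norm = np.sum(np.abs(grad) ** q) ** (1.0 / q)  # ||g||_q
    return -np.sign(grad) * np.abs(grad) ** (q - 1) / dual_norm ** (q - 1)
```

For p ∈ (1, 2) the dual exponent q exceeds 2, so large gradient coordinates are amplified relative to the Euclidean (p = 2) direction, which is one way adaptivity in p changes the update geometry.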

πŸ“ Abstract
While popular optimization methods such as SGD, AdamW, and Lion depend on steepest descent updates in either $\ell_2$ or $\ell_\infty$ norms, there remains a critical gap in handling the non-Euclidean structure observed in modern deep network training. In this work, we address this need by introducing a new accelerated $\ell_p$ steepest descent algorithm, called Stacey, which uses interpolated primal-dual iterate sequences to effectively navigate non-Euclidean smooth optimization tasks. In addition to providing novel theoretical guarantees for the foundations of our algorithm, we empirically compare our approach against these popular methods on tasks including image classification and large language model (LLM) pretraining, demonstrating both faster convergence and higher final accuracy. We further evaluate different values of $p$ across various models and datasets, underscoring the importance and efficiency of non-Euclidean approaches over standard Euclidean methods. Code can be found at https://github.com/xinyuluo8561/Stacey.
Problem

Research questions and friction points this paper is trying to address.

Addresses non-Euclidean optimization in deep networks
Introduces accelerated $\ell_p$ steepest descent algorithm
Improves convergence and accuracy in model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Accelerated $\ell_p$ steepest descent algorithm
Interpolated primal-dual iterate sequences
Non-Euclidean smooth optimization tasks
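As a rough illustration of how interpolated primal-dual iterate sequences can couple a conservative steepest-descent step with a more aggressive dual step, here is a generic linear-coupling-style sketch in NumPy. The interpolation weight, step sizes, and function names are illustrative assumptions, not Stacey's actual update rules:

```python
import numpy as np

def lp_dir(g, p):
    # Steepest-descent direction under the l_p norm, via the dual
    # exponent q with 1/p + 1/q = 1; has unit l_p norm.
    q = p / (p - 1.0)
    dn = np.sum(np.abs(g) ** q) ** (1.0 / q)
    return -np.sign(g) * np.abs(g) ** (q - 1) / dn ** (q - 1)

def accelerated_lp_descent(grad_fn, x0, p=1.5, eta=0.1, steps=100):
    """Generic linear-coupling template (a sketch, not Stacey itself):
    interpolate a primal iterate x (small steepest-descent steps) with
    a dual iterate z (aggressive steps with growing weight)."""
    x = z = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        tau = 2.0 / (k + 1)           # interpolation weight, shrinks over time
        y = (1 - tau) * x + tau * z   # coupled query point
        d = lp_dir(grad_fn(y), p)
        x = y + eta * d               # primal: short steepest-descent step
        z = z + (eta * k) * d         # dual: aggressive, growing step
    return x
```

The primal sequence keeps the iterates stable while the dual sequence accumulates gradient information aggressively; the interpolation is what yields acceleration in the classical smooth setting.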