Training Deep Learning Models with Norm-Constrained LMOs

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key challenges in deep learning optimization: limited geometric awareness, poor hyperparameter transferability across models, and high memory overhead. It proposes a stochastic optimization framework built on the linear minimization oracle (LMO) over a norm ball and, perhaps surprisingly, shows that LMO-based updates can be applied to *unconstrained* deep learning training. The framework introduces a geometry-adaptive update rule and an explicit norm-selection mechanism tailored to deep architectures, which enables hyperparameter transfer across model scales. The method stores only one set of model weights and one set of gradients, both of which can be kept in half precision. On nanoGPT training it achieves significant speedups without any reliance on Adam, at a much smaller memory footprint. Key contributions: (i) extending LMO-based optimization to unconstrained deep learning; (ii) a unified update rule subsuming several existing optimizer families; and (iii) a lightweight, geometry-aware training paradigm that is both computationally and memory efficient.

📝 Abstract
In this work, we study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball. We propose a new stochastic family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems. The resulting update rule unifies several existing optimization methods under a single framework. Furthermore, we propose an explicit choice of norm for deep architectures, which, as a side benefit, leads to the transferability of hyperparameters across model sizes. Experimentally, we demonstrate significant speedups on nanoGPT training without any reliance on Adam. The proposed method is memory-efficient, requiring only one set of model weights and one set of gradients, which can be stored in half-precision.
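The abstract's update rule can be sketched in a few lines: at each step, compute the LMO of the (momentum-averaged) gradient over a norm ball and move a fraction of the way toward its output. The sketch below uses the ℓ∞ ball, whose LMO yields a sign-descent-style step; this is an illustrative choice of norm and hyperparameters, not necessarily the paper's recommended configuration, and all names are invented for the example.

```python
import numpy as np

def lmo_linf_ball(g, radius):
    # LMO over the l-infinity ball of the given radius:
    # argmin over {||s||_inf <= radius} of <g, s> is -radius * sign(g).
    # This norm choice gives a sign-descent-style update; other norm
    # balls yield LMOs corresponding to other known optimizers.
    return -radius * np.sign(g)

# Toy problem: minimize 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([3.0, -2.0, 1.5])
m = np.zeros_like(w)              # momentum buffer
beta, radius, gamma = 0.5, 1.0, 0.1

for _ in range(100):
    grad = w                      # a stochastic gradient in practice
    m = beta * m + (1 - beta) * grad
    w = w + gamma * lmo_linf_ball(m, radius)   # unconstrained LMO step
```

Note that the iterate is never projected onto the ball; the norm ball only shapes the direction and length of each step, which is how the method applies to unconstrained problems.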
Problem

Research questions and friction points this paper is trying to address.

Optimize deep learning with norm-constrained LMOs
Unify existing methods under a single framework
Enable hyperparameter transferability across model sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Norm-constrained LMO optimization
Unified stochastic algorithm framework
Memory-efficient half-precision training
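The memory claim, one half-precision copy of weights and one of gradients with no extra optimizer state, implies roughly a 4x per-parameter saving over fp32 Adam, which additionally keeps two momentum buffers. A back-of-the-envelope sketch, under those assumptions:

```python
def bytes_per_param(weight_bits, grad_bits, extra_state_bits):
    # Optimizer memory per parameter, in bytes (8 bits per byte).
    return (weight_bits + grad_bits + extra_state_bits) / 8

# fp32 Adam: weights + gradients + two fp32 state buffers (m and v).
adam = bytes_per_param(32, 32, 32 + 32)   # 16.0 bytes per parameter

# LMO-based method per the summary: half-precision weights and
# gradients, no additional optimizer state.
lmo = bytes_per_param(16, 16, 0)          # 4.0 bytes per parameter

print(adam / lmo)  # prints 4.0
```

Real deployments add activations, master weights, or sharded state on top of this, so the figure is only the optimizer-state portion of the footprint.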