Adaptive Memory Momentum via a Model-Based Framework for Deep Learning Optimization

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional optimizers (e.g., SGD, AdamW) employ fixed momentum coefficients (e.g., β = 0.9), limiting convergence speed and training stability. To address this, we propose Adaptive Memory Momentum (AMM), a novel momentum mechanism that constructs a dual-plane approximation of the objective function using both current and historical gradient memories. Crucially, AMM is the first momentum-based optimizer to incorporate a model-based proximal framework—without introducing additional hyperparameters—enabling online, adaptive adjustment of the momentum coefficient. AMM operates within standard first-order gradient optimization and integrates seamlessly into existing SGD or AdamW frameworks as a drop-in replacement. Extensive experiments demonstrate that AMM consistently outperforms hand-tuned baseline optimizers across convex optimization benchmarks and diverse large-scale deep learning tasks, delivering superior convergence rates, enhanced training stability, and improved generalization performance.
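The dual-plane idea in the summary can be made concrete. One plausible formalization (our notation, not necessarily the paper's exact derivation) is to build a model of the objective from two planes, one through the current gradient $g_k$ and one through the momentum memory $m_k$, and take a proximal step on their maximum:

```latex
x_{k+1} = \arg\min_{x} \; \max\Big\{ f(x_k) + \langle g_k,\, x - x_k \rangle,\;
                                     f(x_k) + \langle m_k,\, x - x_k \rangle \Big\}
          \;+\; \frac{1}{2\eta}\,\|x - x_k\|^2
```

The minimizer of a max of two affine functions plus a quadratic proximal term has the form $x_{k+1} = x_k - \eta\big(\beta_k m_k + (1-\beta_k)\, g_k\big)$ for some $\beta_k \in [0,1]$ determined by which plane is active, so the momentum coefficient emerges from the model itself rather than being fixed in advance. This is the sense in which a proximal framework can yield online adaptivity without introducing new hyperparameters.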

📝 Abstract
The vast majority of modern deep learning models are trained with momentum-based first-order optimizers. The momentum term governs the optimizer's memory by determining how much each past gradient contributes to the current convergence direction. Fundamental momentum methods, such as Nesterov Accelerated Gradient and the Heavy Ball method, as well as more recent optimizers such as AdamW and Lion, all rely on a momentum coefficient that is customarily set to $\beta = 0.9$ and kept constant during model training, a strategy widely used by practitioners, yet suboptimal. In this paper, we introduce an \textit{adaptive memory} mechanism that replaces constant momentum with a dynamic momentum coefficient that is adjusted online during optimization. We derive our method by approximating the objective function using two planes: one derived from the gradient at the current iterate and the other obtained from the accumulated memory of past gradients. To the best of our knowledge, such a proximal framework has never been used for momentum-based optimization. Our proposed approach is novel, extremely simple to use, and does not rely on extra assumptions or hyperparameter tuning. We implement adaptive memory variants of both SGD and AdamW across a wide range of learning tasks, from simple convex problems to large-scale deep learning scenarios, demonstrating that our approach can outperform standard SGD and Adam with hand-tuned momentum coefficients. Finally, our work opens the door to new ways of inducing adaptivity in optimization.
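To illustrate the drop-in structure the abstract describes, here is a minimal sketch of an SGD loop whose momentum coefficient is recomputed online at every step. The β-selection rule below (alignment between the current gradient and the memory) is a placeholder assumption for illustration only; the paper instead derives β from its dual-plane proximal model.

```python
import numpy as np

def adaptive_beta(g, m):
    """Placeholder rule (our assumption, not the paper's closed form):
    weight the memory by its cosine alignment with the current gradient."""
    ng, nm = np.linalg.norm(g), np.linalg.norm(m)
    if ng == 0.0 or nm == 0.0:
        return 0.0
    cos = float(np.dot(g, m) / (ng * nm))
    return float(np.clip(0.9 * max(cos, 0.0), 0.0, 1.0))

def amm_sgd(grad_fn, x0, lr=0.05, steps=100):
    """SGD whose momentum coefficient is chosen adaptively each step,
    rather than fixed at beta = 0.9."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        beta = adaptive_beta(g, m)         # recomputed online
        m = beta * m + (1.0 - beta) * g    # gradient-memory update
        x = x - lr * m
    return x
```

On a simple convex quadratic such as $f(x) = \|x\|^2$ (gradient $2x$), the iterates shrink toward the origin, matching the convex benchmarks the abstract mentions; only the β rule would change to recover the actual method.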
Problem

Research questions and friction points this paper is trying to address.

Replacing constant momentum with a dynamic adaptive-memory mechanism
Adjusting the momentum coefficient online during deep learning optimization
Improving optimization without extra assumptions or hyperparameter tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive memory replaces constant momentum dynamically
Approximates objective with current gradient and past memory
Simple implementation outperforms standard SGD and AdamW
Kristi Topollai
New York University
Anna Choromanska
New York University
machine learning