Understanding Quantization of Optimizer States in LLM Pre-training: Dynamics of State Staleness and Effectiveness of State Resets

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses optimizer state staleness in large language model pre-training under low-precision quantization, where rounding errors impair the adaptivity of quantized optimizer states. The study systematically analyzes the quantization behavior of low-precision exponential moving average (EMA) states and proposes a reset-based mechanism to restore optimizer responsiveness. A predictive model of state stalling explains why the reset strategy works and yields a theoretically grounded derivation of the optimal reset interval. Combining quantization analysis, stochastic-process modeling, and controlled experiments, the approach is validated in both simulation and real-world LLM pre-training, recovering most of the performance lost to low-precision state storage while substantially reducing optimizer memory overhead.
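The reset mechanism can be illustrated with a toy simulation. This is my own sketch, not the paper's code: the coarse fixed-step quantizer, the re-initialization of the EMA from the current gradient, and the reset period of 20 are all illustrative assumptions, not the paper's derived optimal interval.

```python
# Hypothetical sketch of a periodic optimizer-state reset (assumed details,
# not the paper's exact method). Every `reset_period` steps the quantized
# EMA is re-initialized from the current gradient, discarding whatever
# rounding error accumulated while the state was stalled.

def quantize(x, step=0.1):
    """Round to the nearest multiple of `step` (stand-in for a low-precision format)."""
    return round(x / step) * step

def run(grads, beta=0.99, step=0.1, reset_period=None):
    """Track a quantized EMA of `grads`, optionally resetting it periodically."""
    m = quantize(grads[0], step)
    for t, g in enumerate(grads, start=1):
        if reset_period and t % reset_period == 0:
            m = quantize(g, step)  # reset: discard the stale state
        else:
            m = quantize(beta * m + (1 - beta) * g, step)
    return m

grads = [1.0] * 50 + [0.0] * 50       # signal drops to zero halfway through
stale = run(grads)                     # no resets: the EMA stays stuck at 1.0
fresh = run(grads, reset_period=20)    # with resets: the state re-tracks the signal
```

Without resets, each nominal decay step moves the state by less than half a quantization step, so it rounds back to the stored value forever; a reset lets the state jump to the current statistic and become responsive again.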

📝 Abstract
Quantizing optimizer states is becoming an important ingredient of memory-efficient large-scale pre-training, but the resulting optimizer dynamics remain only partially understood. We study low-precision exponential moving average (EMA) optimizer states and show how quantization can cause many nominal updates to round back to the same stored value, making the state effectively stale and slowing adaptation beyond what the nominal decay would suggest. We then develop a simple predictive model of stalling that estimates one-step stalling probabilities and characterizes how stalling builds up over time after initialization. This perspective provides a mechanistic explanation for why optimizer-state resets help in low precision: once a quantized EMA becomes effectively stale, resetting it can temporarily restore responsiveness. Motivated by this picture, we derive a simple theory-guided method for choosing useful reset periods, showing that in low precision the key question is not only whether resets help, but when they should be applied. Experiments in controlled simulations and LLM pre-training show that suitable reset schedules recover the performance lost to low-precision state storage while substantially reducing optimizer-state memory.
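The stalling mechanism in the abstract can be reproduced in a few lines. This is a minimal sketch under assumed parameters (a coarse fixed-step quantizer and β = 0.99), not the paper's code; real low-precision formats are finer, but the rounding fixed point behaves the same way.

```python
def quantize(x, step=0.1):
    """Round to the nearest multiple of `step` (stand-in for a low-precision format)."""
    return round(x / step) * step

beta = 0.99
grads = [0.0] * 100          # gradients have died down; the EMA should decay

m_q, stalled = 1.0, 0        # quantized state, count of no-op updates
m_fp = 1.0                   # full-precision reference EMA
for g in grads:
    new = quantize(beta * m_q + (1 - beta) * g)
    stalled += (new == m_q)  # update rounded back to the stored value
    m_q = new
    m_fp = beta * m_fp + (1 - beta) * g

# Each nominal update moves the state by less than half a quantization step,
# so it rounds back to the stored value: m_q never leaves 1.0, while the
# full-precision EMA decays toward 0.99**100 ≈ 0.37.
```

This is exactly the gap between nominal and effective decay the abstract describes: the stored state is stale even though every step nominally applied the decay.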
Problem

Research questions and friction points this paper is trying to address.

quantization
optimizer states
state staleness
LLM pre-training
low-precision training
Innovation

Methods, ideas, or system contributions that make the work stand out.

optimizer state quantization
state staleness
state reset
low-precision training
LLM pre-training
Kristi Topollai
New York University
Anna Choromanska
New York University
machine learning