AI Summary
To address the instability, poor convergence, and implementation complexity of large language model (LLM) pretraining under 4-bit floating-point (NVFP4) quantization, this work proposes a stable and efficient FP4 pretraining framework. The framework systematically mitigates the gradient distortion and numerical instability inherent to ultra-low-precision training through four key techniques: a randomized Hadamard transform (RHT) to bound block-level outliers, two-dimensional (2D) quantization so the forward and backward passes see consistent tensor representations, stochastic rounding for unbiased gradient estimation, and selective high-precision layers for the most sensitive components. We successfully pretrain a 12B-parameter model on 10 trillion tokens using FP4, achieving training loss and downstream task performance on par with an FP8 baseline; this is the longest publicly reported FP4 pretraining run to date. The result establishes a scalable, high-fidelity path toward ultra-low-precision training of very large LLMs.
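To make the stochastic-rounding ingredient concrete, below is a minimal NumPy sketch (not the paper's actual kernel) of rounding values onto the FP4 E2M1 magnitude grid, where each value is rounded up or down to its neighbouring representable level with probability proportional to its distance from the other, so the rounding error is zero in expectation. The function name, the standalone grid, and the assumption that inputs are already scaled into the FP4 range are illustrative.

```python
import numpy as np

# Representable magnitudes of the FP4 (E2M1) format; sign is handled separately.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def stochastic_round_to_fp4(x, rng):
    """Stochastically round values (already scaled into [-6, 6]) onto the E2M1 grid.

    Each value is rounded to one of its two neighbouring grid points with
    probability proportional to its distance from the other, so the rounding
    error is zero in expectation, which is the property that keeps gradient
    estimates unbiased.
    """
    sign = np.sign(x)
    mag = np.clip(np.abs(x), 0.0, FP4_GRID[-1])
    lo_idx = np.searchsorted(FP4_GRID, mag, side="right") - 1   # largest grid point <= mag
    hi_idx = np.minimum(lo_idx + 1, len(FP4_GRID) - 1)
    lo, hi = FP4_GRID[lo_idx], FP4_GRID[hi_idx]
    gap = np.where(hi > lo, hi - lo, 1.0)        # avoid divide-by-zero at the top of the grid
    p_up = (mag - lo) / gap                      # probability of rounding up
    rounded = np.where(rng.random(mag.shape) < p_up, hi, lo)
    return sign * rounded

rng = np.random.default_rng(0)
grads = rng.standard_normal(8) * 2.0
print(stochastic_round_to_fp4(grads, rng))
```

With plain round-to-nearest, gradient components smaller than half the local grid spacing would be systematically lost; rounding them up with the matching probability preserves their contribution in expectation over many updates.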
Abstract
Large Language Models (LLMs) today are powerful problem solvers across many domains, and they continue to get stronger as they scale in model size, training set size, and training set quality, as shown by extensive research and experimentation across the industry. Training a frontier model today requires on the order of tens to hundreds of yottaflops, which is a massive investment of time, compute, and energy. Improving pretraining efficiency is therefore essential to enable the next generation of even more capable LLMs. While 8-bit floating point (FP8) training is now widely adopted, transitioning to even narrower precision, such as 4-bit floating point (FP4), could unlock additional improvements in computational speed and resource utilization. However, quantization at this level poses challenges to training stability, convergence, and implementation, notably for large-scale models trained on long token horizons.
In this study, we introduce a novel approach for stable and accurate training of large language models (LLMs) using the NVFP4 format. Our method integrates Random Hadamard transforms (RHT) to bound block-level outliers, employs a two-dimensional quantization scheme for consistent representations across both the forward and backward passes, utilizes stochastic rounding for unbiased gradient estimation, and incorporates selective high-precision layers. We validate our approach by training a 12-billion-parameter model on 10 trillion tokens -- the longest publicly documented training run in 4-bit precision to date. Our results show that the model trained with our NVFP4-based pretraining technique achieves training loss and downstream task accuracies comparable to an FP8 baseline. These findings highlight that NVFP4, when combined with our training approach, represents a major step forward in narrow-precision LLM training algorithms.
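As a concrete illustration of why the Hadamard step helps, here is a minimal NumPy sketch, not the recipe's fused GEMM implementation, of applying a random sign flip followed by an orthonormal Hadamard rotation to a 16-element block: a single extreme outlier is spread evenly across the block, shrinking the dynamic range the 4-bit quantizer has to cover. The function names, the block size of 16, and the shared sign vector are illustrative assumptions.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction of an n x n Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def randomized_hadamard_transform(blocks, rng):
    """Apply a randomized Hadamard transform to each row (block) of `blocks`.

    A random +/-1 sign flip followed by an orthonormal Hadamard rotation mixes
    every element of a block, so a single large outlier is spread across the
    whole block and the per-block dynamic range seen by the 4-bit quantizer
    shrinks.
    """
    n = blocks.shape[-1]
    H = hadamard_matrix(n) / np.sqrt(n)      # orthonormal Hadamard rotation
    signs = rng.choice([-1.0, 1.0], size=n)  # shared random sign diagonal
    return (blocks * signs) @ H

rng = np.random.default_rng(0)
block = np.zeros((1, 16))
block[0, 3] = 8.0                            # one extreme outlier in a block of 16
rotated = randomized_hadamard_transform(block, rng)
print(np.abs(block).max(), np.abs(rotated).max())   # outlier magnitude shrinks by sqrt(16) = 4
```

Because the transform is orthogonal, it can be inverted exactly or absorbed into the surrounding matrix multiplication, so it changes only the numerics of quantization, not the mathematical result of the computation.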