Pretraining Large Language Models with NVFP4

๐Ÿ“… 2025-09-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
To address the instability, poor convergence, and implementation complexity of large language model (LLM) pretraining under 4-bit floating-point (NVFP4) quantization, this work proposes a stable and efficient FP4 pretraining framework. The framework mitigates the gradient distortion and numerical instability inherent to ultra-low-precision training through four key techniques: a randomized Hadamard transform (RHT) to bound block-level outliers, two-dimensional (2D) quantization for consistent scaling across the forward and backward passes, stochastic rounding for unbiased gradient estimation, and keeping selected sensitive layers in higher precision. The authors pretrain a 12B-parameter model on 10 trillion tokens in FP4, matching an FP8 baseline in training loss and downstream task performance; this is the longest publicly reported FP4 pretraining run to date, establishing a scalable recipe for ultra-large-scale, low-precision LLM training.
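The stochastic-rounding idea mentioned above can be illustrated with a generic sketch (this is the textbook technique, not the paper's NVFP4 kernel): round to a neighboring grid point with probability proportional to proximity, so the rounding error is zero-mean and gradients stay unbiased in expectation.

```python
import numpy as np

def stochastic_round(x, rng):
    """Round each value to floor(x) or floor(x)+1, choosing the upper
    neighbor with probability equal to the fractional part, so that
    E[stochastic_round(x)] == x (unbiased rounding)."""
    floor = np.floor(x)
    frac = x - floor
    return floor + (rng.random(x.shape) < frac)

rng = np.random.default_rng(0)
x = np.full(100_000, 0.3)            # many copies of the same value
rounded = stochastic_round(x, rng)   # each copy becomes 0.0 or 1.0
# The sample mean of `rounded` approaches 0.3, unlike round-to-nearest,
# which would map every copy to 0.0 and introduce a systematic bias.
```

Round-to-nearest would collapse all copies of 0.3 to 0.0; stochastic rounding preserves the mean, which is the property the paper relies on for gradient quantization.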

๐Ÿ“ Abstract
Large Language Models (LLMs) today are powerful problem solvers across many domains, and they continue to get stronger as they scale in model size, training set size, and training set quality, as shown by extensive research and experimentation across the industry. Training a frontier model today requires on the order of tens to hundreds of yottaflops, which is a massive investment of time, compute, and energy. Improving pretraining efficiency is therefore essential to enable the next generation of even more capable LLMs. While 8-bit floating point (FP8) training is now widely adopted, transitioning to even narrower precision, such as 4-bit floating point (FP4), could unlock additional improvements in computational speed and resource utilization. However, quantization at this level poses challenges to training stability, convergence, and implementation, notably for large-scale models trained on long token horizons. In this study, we introduce a novel approach for stable and accurate training of large language models (LLMs) using the NVFP4 format. Our method integrates Random Hadamard transforms (RHT) to bound block-level outliers, employs a two-dimensional quantization scheme for consistent representations across both the forward and backward passes, utilizes stochastic rounding for unbiased gradient estimation, and incorporates selective high-precision layers. We validate our approach by training a 12-billion-parameter model on 10 trillion tokens -- the longest publicly documented training run in 4-bit precision to date. Our results show that the model trained with our NVFP4-based pretraining technique achieves training loss and downstream task accuracies comparable to an FP8 baseline. These findings highlight that NVFP4, when combined with our training approach, represents a major step forward in narrow-precision LLM training algorithms.
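The NVFP4 format referenced in the abstract stores elements in FP4 (E2M1) with a shared scale per small block. The sketch below simulates block-scaled fake quantization in NumPy; the 16-element block size and the E2M1 magnitude grid follow NVIDIA's public NVFP4 description and are assumptions here, not code from the paper (which also pairs this with stochastic rounding rather than the round-to-nearest shown).

```python
import numpy as np

# Magnitudes representable in E2M1 (FP4): 0, 0.5, 1, 1.5, 2, 3, 4, 6.
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
BLOCK = 16  # assumed NVFP4 micro-block size

def fake_quantize_nvfp4(x):
    """Simulate block-scaled FP4 quantization: scale each block so its max
    magnitude maps to 6.0 (the FP4 max), round every element to the nearest
    E2M1 magnitude, then rescale back to the original range."""
    blocks = x.reshape(-1, BLOCK)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / E2M1_GRID[-1]
    scale = np.where(scale == 0, 1.0, scale)     # avoid div-by-zero on all-zero blocks
    scaled = blocks / scale
    idx = np.abs(np.abs(scaled)[..., None] - E2M1_GRID).argmin(axis=-1)
    q = np.sign(scaled) * E2M1_GRID[idx]
    return (q * scale).reshape(x.shape)

x = np.random.default_rng(1).normal(size=64)
xq = fake_quantize_nvfp4(x)
# The max-magnitude element of each block round-trips exactly; all other
# elements land within one grid step of their scaled value.
```

Because each block's scale is set from its own maximum, a single outlier inflates the scale for only 16 neighbors rather than the whole tensor, which is what the Hadamard transform and 2D scheme then further improve on.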
Problem

Research questions and friction points this paper is trying to address.

Enables stable 4-bit floating-point (FP4) training for LLMs
Addresses quantization challenges in large-scale model pretraining
Improves computational efficiency while preserving model quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses NVFP4 format for efficient LLM training
Integrates Random Hadamard transforms to bound outliers
Employs two-dimensional quantization with stochastic rounding
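The Random Hadamard transform bullet above can be sketched as follows (an illustrative NumPy version under assumed dimensions; the paper applies the rotation to GEMM operands before quantization). Multiplying by a random-sign diagonal and an orthonormal Hadamard matrix preserves the norm while smearing any single outlier across the whole block, which tightens the per-block quantization scale.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.ones((1, 1))
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def randomized_hadamard(x, rng):
    """Randomized Hadamard transform: random sign flips followed by an
    orthonormal Hadamard rotation. Orthogonal, hence exactly invertible."""
    n = x.shape[-1]
    signs = rng.choice([-1.0, 1.0], size=n)
    H = hadamard(n) / np.sqrt(n)  # orthonormal rows
    return (x * signs) @ H

rng = np.random.default_rng(0)
x = np.zeros(16)
x[3] = 8.0                       # one large outlier in a 16-value block
y = randomized_hadamard(x, rng)
# The norm of y equals the norm of x, but the peak magnitude drops from 8
# to 2 (= 8 / sqrt(16)): the outlier's energy is spread over all 16 slots.
```

Because the transform is orthogonal, it can be inverted exactly after the low-precision matrix multiply, so boundedness of outliers is gained without changing the mathematical result.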
Felix Abecassis, NVIDIA
Anjulie Agrusa, NVIDIA
Dong Ahn, NVIDIA
Jonah Alben, NVIDIA
Stefania Alborghetti, NVIDIA
Michael Andersch, NVIDIA
Sivakumar Arayandi, NVIDIA
Alexis Bjorlin, NVIDIA
Aaron Blakeman, NVIDIA
Evan Briones, NVIDIA
Ian Buck, NVIDIA
Bryan Catanzaro, NVIDIA
Jinhang Choi, NVIDIA
Mike Chrzanowski, NVIDIA
Eric Chung, NVIDIA
Victor Cui, NVIDIA
Steve Dai, NVIDIA Research
Bita Darvish Rouhani, NVIDIA
Carlo del Mundo, NVIDIA
Deena Donia, NVIDIA
Burc Eryilmaz, NVIDIA
Henry Estela, NVIDIA
Abhinav Goel, NVIDIA
Oleg Goncharov, NVIDIA