🤖 AI Summary
This work addresses the significant accuracy degradation observed in existing quantization-aware training methods when performing end-to-end pretraining of large language models in the low-precision NVFP4 format, primarily due to its limited representational capacity. To overcome this challenge, the authors propose Quartet II, the first framework enabling full NVFP4-precision pretraining of large models. Its core innovation is the MS-EDEN unbiased quantization routine, which reduces gradient estimation error in both forward and backward passes by more than 2× compared to stochastic rounding. Integrated with the NVFP4 microscaling format, full linear-layer quantization, and custom kernels optimized for Blackwell GPUs, the approach demonstrates strong empirical performance: on a 1.9B-parameter model trained with 38B tokens, it achieves up to 4.2× speedup over BF16 training while maintaining comparable accuracy.
📄 Abstract
The NVFP4 lower-precision format, supported in hardware by NVIDIA Blackwell GPUs, promises to allow, for the first time, end-to-end fully-quantized pre-training of massive models such as LLMs. Yet, existing quantized training methods still sacrifice some of the representation capacity of this format in favor of more accurate unbiased quantized gradient estimation by stochastic rounding (SR), losing noticeable accuracy relative to standard FP16 and FP8 training. In this paper, we improve the state of the art for quantized training in NVFP4 via a novel unbiased quantization routine for micro-scaled formats, called MS-EDEN, that has more than 2x lower quantization error than SR. We integrate it into a novel fully-NVFP4 quantization scheme for linear layers, called Quartet II. We show analytically that Quartet II achieves consistently better gradient estimation across all major matrix multiplications, both on the forward and on the backward passes. In addition, our proposal synergizes well with recent training improvements aimed specifically at NVFP4. We further validate Quartet II on end-to-end LLM training with up to 1.9B parameters on 38B tokens. We provide kernels for execution on NVIDIA Blackwell GPUs with up to 4.2x speedup over BF16. Our code is available at https://github.com/IST-DASLab/Quartet-II .
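The stochastic-rounding baseline the abstract compares against can be illustrated with a minimal sketch. The snippet below is not the paper's MS-EDEN routine or its NVFP4 kernels; it only demonstrates the key property of SR that both methods share: the rounding is unbiased, so the quantized value equals the true value in expectation, at the cost of added per-sample variance.

```python
import math
import random

def stochastic_round(x: float, rng: random.Random) -> int:
    """Round x down or up at random so that E[stochastic_round(x)] == x.

    x is rounded up to ceil(x) with probability (x - floor(x)) and
    down to floor(x) otherwise, which makes the estimator unbiased.
    """
    lo = math.floor(x)
    return lo + (rng.random() < (x - lo))  # bool adds as 0 or 1

rng = random.Random(0)
n = 200_000
mean = sum(stochastic_round(0.3, rng) for _ in range(n)) / n
print(mean)  # close to 0.3; round-to-nearest would always give 0 here
```

Unbiasedness is what makes SR (and, per the paper, MS-EDEN) suitable for gradient quantization: biased rounding errors accumulate over training steps, while zero-mean errors average out.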