🤖 AI Summary
To address training divergence and degraded inference performance caused by NVFP4 low-precision quantization in large language model (LLM) training, this paper proposes the Four Over Six (4/6) quantization method. The core idea is to evaluate two candidate scale factors for each block of values and choose the one that better represents the block's near-maximal values, mitigating the performance degradation driven by FP4's inherent quantization error in that region. Built on adaptive block-wise floating-point scaling, 4/6 applies to both forward and backward passes as well as multiple post-training quantization paradigms, and admits an efficient implementation on NVIDIA Blackwell architectures. Experiments demonstrate stable convergence during Transformer- and hybrid-architecture pretraining, with training loss closely matching BF16 baselines and consistent improvements in downstream task accuracy.
📝 Abstract
As large language models have grown larger, low-precision numerical formats such as NVFP4 have become increasingly popular due to the speed and memory benefits they provide. However, to accelerate computation with NVFP4, all matrix multiplication operands--weights and activations in the forward pass, and weights, activations, and gradients in the backward pass--must be quantized to NVFP4, often leading to divergence during training and performance degradation during inference. To address this issue, in this work we introduce Four Over Six (4/6), a modification to the NVFP4 quantization algorithm that evaluates two potential scale factors for each block of values. Unlike integer formats, floating-point formats such as FP4 have the most quantization error on near-maximal values in each block, which we find to be primarily responsible for downstream performance degradation. We find that for some blocks, scaling to smaller FP4 values makes the distribution of representable values more uniform, improving representation of near-maximal values. Importantly, 4/6 can be implemented efficiently on NVIDIA Blackwell GPUs, making it viable to use while training LLMs with NVFP4. In pre-training experiments with transformer and hybrid model architectures, we find that 4/6 prevents divergence in several cases, bringing training loss significantly closer to BF16 compared to models trained with current state-of-the-art NVFP4 training recipes. We also find that 4/6 can be easily incorporated into many different post-training quantization methods and generally improves downstream accuracy. We hope this inspires future work in training and deploying models with NVFP4.
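To make the two-candidate idea concrete, here is a toy Python sketch of the selection step described above. It assumes the standard FP4 (E2M1) magnitude grid {0, 0.5, 1, 1.5, 2, 3, 4, 6}, and that the two candidate scales map the block's absolute maximum to the FP4 values 6 (standard NVFP4 scaling) and 4; the function names and the sum-of-squared-errors selection criterion are illustrative choices, not the paper's exact algorithm or kernel.

```python
# Representable FP4 (E2M1) magnitudes, as used by NVFP4.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(block, scale):
    """Round each value to the nearest representable FP4 value at the given scale."""
    out = []
    for x in block:
        mag = abs(x) / scale
        q = min(FP4_VALUES, key=lambda v: abs(v - mag))
        out.append((q if x >= 0 else -q) * scale)
    return out

def sq_err(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def four_over_six(block):
    """4/6-style selection (sketch): try mapping the block max to FP4 value 6
    (standard scaling) or to 4, and keep whichever scale gives lower error."""
    amax = max(abs(x) for x in block)
    candidates = [quantize_fp4(block, amax / t) for t in (6.0, 4.0)]
    return min(candidates, key=lambda c: sq_err(block, c))
```

Because the standard scale (max mapped to 6) is always one of the two candidates, this per-block selection can never do worse than plain NVFP4 scaling under the chosen error metric; scaling the max to 4 instead spends the grid's denser low-magnitude region differently, which the abstract notes can represent near-maximal values more uniformly for some blocks.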