🤖 AI Summary
Fully quantized low-precision fine-tuning of large language models (LLMs) suffers from severe accuracy degradation due to outliers in weights and activations.
Method: This work introduces HALO, the first practical framework for fully quantized FP8 fine-tuning. It strategically places Hadamard rotations in both the forward and backward passes to suppress outliers, and combines them with quantization-aware training (QAT), FSDP-based low-precision communication, and custom FP8 CUDA kernels, enabling stable FP8 computation for all large matrix multiplications. The framework supports both standard fine-tuning and parameter-efficient fine-tuning (PEFT) without requiring additional hyperparameter tuning.
Contribution/Results: On LLaMA-family models, HALO achieves near-full-precision accuracy during FP8 fine-tuning while delivering up to 1.31× end-to-end speedup on RTX 4090 GPUs. This work provides the first empirical validation of the feasibility, stability, and efficiency of fully quantized LLM fine-tuning.
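Why Hadamard rotations help: an orthonormal Hadamard matrix spreads a single outlier coordinate across the whole vector, shrinking the absmax that sets the quantization scale, and the rotation can be undone exactly after quantization. The minimal numpy sketch below illustrates this effect on a toy vector; the `quantize_dequantize` helper is a simple symmetric absmax "fake quantizer" standing in for FP8 casting, not HALO's actual kernels.

```python
import numpy as np

def quantize_dequantize(x, levels=127):
    # Symmetric absmax fake quantization (a stand-in for FP8 casting):
    # one outlier inflates the scale and coarsens the grid for everything.
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

n = 64
x = np.full(n, 0.1)
x[0] = 10.0  # a single outlier dominates the absmax scale

# Orthonormal Hadamard matrix via Sylvester construction (2^6 = 64).
H = np.array([[1.0]])
for _ in range(6):
    H = np.block([[H, H], [H, -H]])
H /= np.sqrt(n)

# Direct quantization: the outlier forces a coarse grid on the small entries.
direct_err = np.mean((quantize_dequantize(x) - x) ** 2)

# Rotate, quantize in the rotated space, rotate back (H.T @ H = I).
x_rot = H @ x
restored = H.T @ quantize_dequantize(x_rot)
rotated_err = np.mean((restored - x) ** 2)
```

After rotation the largest entry drops from 10.0 to about 2.0, so the quantization grid is several times finer and the reconstruction error falls by orders of magnitude, even though the rotation itself is lossless.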
📝 Abstract
Quantized training of Large Language Models (LLMs) remains an open challenge, as maintaining accuracy while performing all matrix multiplications in low precision has proven difficult. This is particularly the case when fine-tuning pre-trained models, which often already have large weight and activation outlier values that render quantized optimization difficult. We present HALO, a novel quantization-aware training approach for Transformers that enables accurate and efficient low-precision training by combining 1) strategic placement of Hadamard rotations in both forward and backward passes, to mitigate outliers during the low-precision computation, 2) FSDP integration for low-precision communication, and 3) high-performance kernel support. Our approach ensures that all large matrix multiplications during the forward and backward passes are executed in lower precision. Applied to LLAMA-family models, HALO achieves near-full-precision-equivalent results during fine-tuning on various tasks, while delivering up to 1.31x end-to-end speedup for full fine-tuning on RTX 4090 GPUs. Our method supports both standard and parameter-efficient fine-tuning (PEFT) methods, both backed by efficient kernel implementations. Our results demonstrate the first practical approach to fully quantized LLM fine-tuning that maintains accuracy in FP8 precision, while delivering performance benefits.
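The key algebraic fact behind running every large matrix multiplication in low precision is that a rotation pair cancels inside the product: since H is orthogonal, (X H)(Hᵀ W) = X W exactly, so rotating both operands before quantizing changes nothing in exact arithmetic but de-concentrates outlier channels so the low-precision cast loses less information. A hedged numpy sketch of this idea, using per-tensor absmax fake quantization in place of real FP8 GEMM kernels (the outlier channel in `X` mimics the activation outliers of pre-trained LLMs):

```python
import numpy as np

def fake_quant(a, levels=127):
    # Per-tensor absmax fake quantization, standing in for an FP8 cast.
    s = np.abs(a).max() / levels
    return np.round(a / s) * s

rng = np.random.default_rng(0)
n = 64
X = rng.normal(size=(8, n))
X[:, 0] += 20.0                     # outlier channel, as in pre-trained LLMs
W = rng.normal(size=(n, n))

# Orthonormal Hadamard matrix (Sylvester construction, 2^6 = 64).
H = np.array([[1.0]])
for _ in range(6):
    H = np.block([[H, H], [H, -H]])
H /= np.sqrt(n)

ref = X @ W                          # full-precision reference
naive = fake_quant(X) @ fake_quant(W)

# Rotations cancel in exact arithmetic: (X H)(H.T W) == X W,
# but the rotated operands have no dominant channel, so the
# quantization grid is finer and the product error shrinks.
rotated = fake_quant(X @ H) @ fake_quant(H.T @ W)

naive_err = np.mean((naive - ref) ** 2)
rot_err = np.mean((rotated - ref) ** 2)
```

In HALO the analogous rotations are placed around the low-precision GEMMs of both the forward and backward passes; this sketch only demonstrates the forward-pass principle.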