Scaling FP8 training to trillion-token LLMs

πŸ“… 2024-09-19
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 5
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
This work addresses outlier amplification and long-term training instability in large language models (LLMs) trained to 2 trillion tokens at FP8 precision, primarily induced by SwiGLU activations. Methodologically: (1) we identify weight alignment in SwiGLU as a key cause of FP8 numerical instability; (2) we propose Smooth-SwiGLUβ€”a numerically stable variant that preserves the original function’s expressiveness while suppressing activation outliers; and (3) we introduce the first fully FP8 quantization of both first- and second-order moments in the Adam optimizer. Leveraging the Megatron-DeepSpeed framework on Intel Gaudi2 hardware, we successfully train a 7B LLM for 2 trillion tokens in pure FP8 across 256 accelerators, matching BF16 baseline accuracy while achieving a 34% throughput improvement. The implementation is open-sourced.
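The core idea behind Smooth-SwiGLU can be illustrated with a toy sketch: rescale the branch feeding the FP8-sensitive multiplication by a factor, and fold the inverse factor into the downstream projection, so the function is mathematically unchanged while the activations the quantizer sees stay in range. The scalar helpers below (`w_gate`, `w_up`, `w_down`, and the scale `s`) are illustrative stand-ins, not the paper's notation:

```python
import math

def silu(z):
    # SiLU / swish activation used inside SwiGLU
    return z / (1.0 + math.exp(-z))

def swiglu(x, w_gate, w_up, w_down):
    # scalar stand-in for one channel of a SwiGLU MLP block:
    # down_proj( silu(gate_proj(x)) * up_proj(x) )
    return silu(x * w_gate) * (x * w_up) * w_down

def smooth_swiglu(x, w_gate, w_up, w_down, s):
    # divide the up branch by s (shrinking the activation an FP8
    # matmul would see), then fold s back into the down projection;
    # algebraically identical to swiglu() for any nonzero s
    h = silu(x * w_gate) * (x * (w_up / s))
    return h * (w_down * s)
```

For any nonzero `s` the two functions agree to floating-point precision; in the actual method the scales are chosen per channel so that pre-quantization activations avoid FP8 overflow.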

πŸ“ Abstract
We train, for the first time, large language models using FP8 precision on datasets up to 2 trillion tokens -- a 20-fold increase over previous limits. Through these extended training runs, we uncover critical instabilities in FP8 training that were not observable in earlier works with shorter durations. We trace these instabilities to outlier amplification by the SwiGLU activation function. Interestingly, we show, both analytically and empirically, that this amplification happens only over prolonged training periods, and link it to a SwiGLU weight alignment process. To address this newly identified issue, we introduce Smooth-SwiGLU, a novel modification that ensures stable FP8 training without altering function behavior. We also demonstrate, for the first time, FP8 quantization of both Adam optimizer moments. Combining these innovations, we successfully train a 7B parameter model using FP8 precision on 256 Intel Gaudi2 accelerators, achieving on-par results with the BF16 baseline while delivering up to a $\sim 34\%$ throughput improvement. A reference implementation is supplied in https://github.com/Anonymous1252022/Megatron-DeepSpeed.
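To make the optimizer-moment claim concrete, here is a hedged sketch of what storing a value in an FP8-style grid involves: apply a scale so the tensor fits the format's dynamic range, then round to the nearest representable value. The format parameters and function name are illustrative; real E4M3/E5M2 hardware formats additionally handle subnormals, overflow saturation, and special values, which this sketch omits:

```python
import math

def quantize_fp8_like(x, exp_bits=4, man_bits=3, scale=1.0):
    """Round x to a hypothetical FP8-style grid (sketch only;
    overflow saturation and subnormals are not modeled)."""
    if x == 0.0:
        return 0.0
    s = x * scale                              # apply tensor scale first
    e = math.floor(math.log2(abs(s)))          # unbiased exponent of s
    e_max = 2 ** (exp_bits - 1) - 1
    e = max(min(e, e_max), -e_max + 1)         # clamp to exponent range
    step = 2.0 ** (e - man_bits)               # grid spacing at exponent e
    return round(s / step) * step / scale      # round, then undo the scale
```

Per the abstract, both Adam moments are kept in FP8; since the second moment spans many more orders of magnitude than the first, it is the harder of the two to fit into such a grid, which is part of why quantizing both had not been shown before.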
Problem

Research questions and friction points this paper is trying to address.

FP8 precision instability in training
SwiGLU activation function amplification
FP8 quantization for Adam optimizer
Innovation

Methods, ideas, or system contributions that make the work stand out.

FP8 precision training
Smooth-SwiGLU modification
FP8 quantization Adam optimizer
Maxim Fishman
Habana Labs – An Intel company, Caesarea, Israel
Brian Chmiel
Habana Labs – An Intel company, Caesarea, Israel
Ron Banner
Habana Labs – An Intel company, Caesarea, Israel
Daniel Soudry
Associate Professor
Neural Networks, Machine Learning, Theoretical Neuroscience