μnit Scaling: Simple and Scalable FP8 LLM Training

📅 2025-02-09
🤖 AI Summary
To address instability, reliance on dynamic scaling, and tedious hyperparameter tuning in FP8 training—stemming from reduced numerical precision—this paper proposes μnit Scaling: a first-principles-driven, static scaling strategy derived from numerical analysis of Transformer operators. It eliminates dynamic scaling entirely, requires no specialized hyperparameter adjustments, and ensures numerical consistency between training and inference. Applied to all hidden-layer linear projections in 1B–13B-parameter LLMs, μnit Scaling enables full FP8 computation while matching the convergence quality of FP16/FP32 baselines and accelerating training by up to 33%. Experiments demonstrate its transferability across model widths and ease of scalability. This work establishes a simple, robust, and practical paradigm for large-scale low-precision LLM training.

📝 Abstract
Large Language Model training with 8-bit floating point (FP8) formats promises significant efficiency improvements, but reduced numerical precision makes training challenging. It is currently possible to train in FP8 only if one is willing to tune various hyperparameters, reduce model scale, or accept the overhead of computing dynamic scale factors. We demonstrate simple, scalable FP8 training that requires no dynamic scaling factors or special hyperparameters, even at large model sizes. Our method, μnit Scaling (μS), also enables simple hyperparameter transfer across model widths, matched numerics across training and inference, and other desirable properties. μnit Scaling is straightforward to implement, consisting of a set of minimal interventions based on a first-principles analysis of common transformer operations. We validate our method by training models from 1B to 13B parameters, performing all hidden linear layer computations in FP8. We achieve quality equal to higher precision baselines while also training up to 33% faster.
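To make the core idea concrete, here is a minimal NumPy sketch of a unit-scaled linear forward pass, the kind of static-scaling intervention the abstract describes. This is an illustration of the general unit-scaling principle, not the paper's exact implementation: weights are initialized with unit variance and the matmul output is divided by the square root of the fan-in, so that unit-variance inputs produce unit-variance outputs. Keeping every tensor near unit scale is what allows a fixed, precomputed scale factor to map values into FP8's narrow dynamic range, with no dynamic rescaling at runtime. The function name and shapes are illustrative.

```python
import numpy as np

def unit_scaled_linear(x, w):
    """Unit-scaled linear layer forward pass (illustrative sketch).

    With unit-variance inputs and unit-variance weights, each output
    element is a sum of fan_in terms; dividing by sqrt(fan_in) keeps
    the output at ~unit variance. Because activations stay near unit
    scale by construction, a static scale suffices for FP8 casting,
    instead of per-tensor dynamic scale factors.
    """
    fan_in = w.shape[0]
    return (x @ w) / np.sqrt(fan_in)

rng = np.random.default_rng(0)
batch, fan_in, fan_out = 512, 4096, 4096
x = rng.standard_normal((batch, fan_in))    # unit-variance activations
w = rng.standard_normal((fan_in, fan_out))  # unit-variance weights
y = unit_scaled_linear(x, w)
print(float(y.std()))  # stays close to 1.0, inside FP8 dynamic range
```

Without the `1/sqrt(fan_in)` factor, the output standard deviation would grow to roughly `sqrt(fan_in)` (64 here), drifting toward FP8 overflow and motivating the dynamic scaling machinery this method avoids.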
Problem

Research questions and friction points this paper is trying to address.

FP8 training efficiency
No dynamic scaling factors
Hyperparameter transfer across models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simple FP8 training without dynamic scaling
Minimal interventions for transformer operations
Equal quality with higher precision baselines
Saaketh Narayan
Databricks Mosaic Research, San Francisco, CA
Abhay Gupta
Databricks Mosaic Research, San Francisco, CA
Mansheej Paul
Research Scientist, Databricks
Davis Blalock
Research Scientist, Databricks
Deep Learning · Efficient Machine Learning · Quantization · Compression