Gated Removal of Normalization in Transformers Enables Stable Training and Efficient Inference

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the necessity of sample-dependent normalization in pre-normalized Transformers and proposes TaperNorm, a plug-and-play dynamic normalization alternative. TaperNorm initially mimics standard normalization during early training and then smoothly transitions to a sample-independent linear mapping via an EMA-calibrated global gating mechanism coupled with cosine annealing scheduling. The scaling parameters are fused into adjacent linear layers to enhance inference efficiency. This approach is the first to dynamically remove normalization during training without performance degradation, revealing that normalization's primary role is to provide a scale anchor that prevents unbounded logit growth. A fixed-target auxiliary loss is introduced as a replacement. Experiments show that TaperNorm maintains training effectiveness while eliminating per-token statistics, achieving up to a 1.22× speedup in last-token inference throughput.
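The gated transition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the exact schedule parameters, and the use of a single EMA-calibrated scalar `s_ema` in place of the per-token RMS statistic are assumptions for clarity.

```python
import numpy as np

def cosine_gate(step, warmup_steps, taper_steps):
    """Global gate schedule: g = 1 during warmup, then cosine-decayed to 0.

    Hypothetical parameterization; the paper specifies only "gate warmup"
    followed by cosine annealing of a single global gate.
    """
    if step < warmup_steps:
        return 1.0
    t = min((step - warmup_steps) / taper_steps, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * t))

def tapernorm(x, gamma, s_ema, g, eps=1e-6):
    """Gated blend of RMSNorm and a sample-independent linear scaling.

    x: (d,) token activation; gamma: learned gain; s_ema: EMA-calibrated
    global inverse scale standing in for the per-token RMS statistic.
    At g = 1 this is exactly RMSNorm; at g = 0 the per-token statistic
    vanishes and the map is a fixed elementwise scaling.
    """
    rms = np.sqrt(np.mean(x ** 2) + eps)
    normed = gamma * x / rms       # standard, sample-dependent branch
    linear = gamma * x * s_ema     # fixed, sample-independent branch
    return g * normed + (1.0 - g) * linear
```

At `g = 0` the remaining map is `x -> (gamma * s_ema) * x`, which is what permits fusing the layer into the adjacent linear projection.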

๐Ÿ“ Abstract
Normalization is widely viewed as essential for stabilizing Transformer training. We revisit this assumption for pre-norm Transformers and ask to what extent sample-dependent normalization is needed inside Transformer blocks. We introduce TaperNorm, a drop-in replacement for RMSNorm/LayerNorm that behaves exactly like the standard normalizer early in training and then smoothly tapers to a learned sample-independent linear/affine map. A single global gate is held at $g{=}1$ during gate warmup, used to calibrate the scaling branch via EMAs, and then cosine-decayed to $g{=}0$, at which point per-token statistics vanish and the resulting fixed scalings can be folded into adjacent linear projections. Our theoretical and empirical results isolate scale anchoring as the key role played by output normalization: as a (near) $0$-homogeneous map it removes radial gradients at the output, whereas without such an anchor cross-entropy encourages unbounded logit growth (``logit chasing''). We further show that a simple fixed-target auxiliary loss on the pre-logit residual-stream scale provides an explicit alternative anchor and can aid removal of the final normalization layer. Empirically, TaperNorm matches normalized baselines under identical setups while eliminating per-token statistics and enabling these layers to be folded into adjacent linear projections at inference. On an efficiency microbenchmark, folding internal scalings yields up to $1.22\times$ higher throughput in last-token logits mode. These results take a step towards norm-free Transformers while identifying the special role output normalization plays.
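The inference-time folding mentioned in the abstract rests on a simple identity: once the gate reaches $g{=}0$, the layer is a fixed elementwise scaling, and applying it before a linear projection is the same as rescaling that projection's input columns. A hedged sketch (the helper name and shapes are illustrative, not from the paper):

```python
import numpy as np

def fold_scaling_into_linear(W, gamma, s):
    """Fold a fixed elementwise scaling (gamma * s) into the following
    linear projection: W @ ((gamma * s) * x) == (W * (gamma * s)) @ x.

    W: (out, d) weight of the adjacent linear layer; gamma: (d,) learned
    gain; s: scalar calibrated scale. Returns the fused weight, so the
    normalization layer can be dropped entirely at inference.
    """
    return W * (gamma * s)[None, :]   # scale each input column of W
```

Because the scaling is absorbed into a weight that is computed once, the per-token normalization work disappears from the forward pass, which is the source of the reported throughput gain in last-token logits mode.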
Problem

Research questions and friction points this paper is trying to address.

Normalization
Transformers
Stable Training
Efficient Inference
Sample-dependent Statistics
Innovation

Methods, ideas, or system contributions that make the work stand out.

TaperNorm
Normalization-free Transformers
Scale anchoring
Efficient inference
Logit chasing
Andrei Kanavalau
Department of Electrical Engineering, Stanford University, Stanford, USA
Carmen Amo Alonso
Department of Computer Science, Stanford University, Stanford, USA
Sanjay Lall
Stanford University
Control · Optimization · Signal Processing