Stronger Normalization-Free Transformers

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Point-wise activation functions in normalization-free Transformers have been under-designed, limiting performance and generalization. Method: This paper proposes Derf(x) = erf(αx + s), a learnable point-wise nonlinearity based on the rescaled Gaussian cumulative distribution function, identified via a large-scale functional-space search guided by an analysis of the functions' intrinsic properties. Contribution/Results: Derf is the first activation shown to consistently outperform LayerNorm, RMSNorm, and Dynamic Tanh (DyT) across diverse domains—including image classification and generation, speech representation learning, and DNA sequence modeling—without normalization layers. Empirical evaluation indicates that the gains stem from improved generalization rather than stronger fitting capacity, while Derf retains structural simplicity and robustness to hyperparameter and architectural changes. Theoretically grounded and empirically validated, Derf serves as a practical, efficient, and interpretable core component for normalization-free Transformer architectures.

📝 Abstract
Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work searches further for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce $\mathrm{Derf}(x) = \mathrm{erf}(\alpha x + s)$, where $\mathrm{erf}(x)$ is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from its improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
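The Derf function from the abstract is simple enough to sketch directly. Below is a minimal scalar version using Python's `math.erf`; in the paper α and s are learnable per-layer parameters, so the defaults here are purely illustrative, and any per-channel or affine-output details of the actual layer are not shown:

```python
import math

def derf(x: float, alpha: float = 1.0, s: float = 0.0) -> float:
    """Derf(x) = erf(alpha * x + s).

    erf is the rescaled Gaussian CDF, so outputs are bounded in
    (-1, 1). In the paper alpha and s are learned per layer; the
    defaults here are illustrative, not values from the paper.
    """
    return math.erf(alpha * x + s)

# Like DyT's tanh, Derf squashes extreme activations, which is what
# lets a point-wise function stand in for a normalization layer.
print(derf(0.0))    # erf(0) = 0 when s = 0
print(derf(100.0))  # saturates near 1
```

The shift s is the visible difference from DyT's tanh(αx): it lets the learned squashing curve sit asymmetrically around zero rather than being forced through the origin.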
Problem

Research questions and friction points this paper is trying to address.

Develops a point-wise function Derf to replace normalization layers in Transformers
Explores function designs that surpass Dynamic Tanh for stable convergence and performance
Evaluates Derf's generalization across vision, speech, and DNA modeling tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derf function replaces normalization layers
Large-scale search identifies erf-based activation
Derf improves generalization across multiple domains
Mingzhi Chen (Princeton University)
Taiming Lu (Princeton University)
Jiachen Zhu (NYU)
Mingjie Sun (Thinking Machines Lab)
Zhuang Liu (Princeton University)