Allocation of Parameters in Transformers

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
In Transformers, allocating parameters (attention head count and head dimension) uniformly across layers is computationally suboptimal. Method: We propose a cross-layer differentiated parameterization strategy. Theoretically, we analyze information flow from a function-approximation perspective and reveal softmax saturation in deeper layers, proving diminishing returns from increasing head dimension; this yields a principled trade-off between head count and head dimension under a fixed parameter budget. Contribution/Results: Experiments show that early layers benefit from more heads for enhanced feature extraction, while later layers tolerate substantial reductions in both head count and head dimension without performance degradation. Our strategy reduces computational cost by 15–25% while preserving accuracy, providing an interpretable theoretical foundation and practical configuration guidelines for efficient large-scale Transformer design.
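The allocation idea in the summary can be sketched numerically: hold the per-layer attention budget (heads × head dimension) roughly fixed while shifting from many heads in early layers to fewer, narrower heads later. The schedule below is purely illustrative, a minimal sketch of the trade-off, not the paper's actual allocation rule; the linear taper and all constants are assumptions.

```python
# Illustrative only: a linear taper from many heads (early layers) to few
# heads (late layers), keeping heads * head_dim close to a fixed budget.
# The paper's actual strategy may differ; this just shows the trade-off.

def allocate_heads(num_layers, budget=1024, max_heads=16, min_heads=4):
    """Return a list of (num_heads, head_dim) pairs, one per layer."""
    configs = []
    for layer in range(num_layers):
        frac = layer / max(num_layers - 1, 1)   # 0 at first layer, 1 at last
        heads = round(max_heads - frac * (max_heads - min_heads))
        head_dim = budget // heads              # keep heads * head_dim <= budget
        configs.append((heads, head_dim))
    return configs

for i, (h, d) in enumerate(allocate_heads(6)):
    print(f"layer {i}: {h} heads x dim {d} (total {h * d})")
```

Under this sketch every layer spends about the same parameter budget on attention, but early layers get more, smaller heads (many parallel feature extractors) while late layers get a few wide heads.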

📝 Abstract
Transformers have achieved remarkable successes across a wide range of applications, yet the theoretical foundation of their model efficiency remains underexplored. In this work, we investigate how model parameters, mainly attention heads and head dimensions, should be allocated across layers to balance expressivity and efficiency. We first provide a mathematical analysis of the role of early layers in information extraction from an approximation perspective, with a theoretical characterization of the trade-off between the number of heads and the head dimension under a fixed parameter budget. In addition, we uncover and prove the saturation behavior of softmax activations: continuously increasing head dimensions can lead to diminishing returns in learning errors, particularly for long sequences. Supported by both theory and experiments, this saturation pattern suggests that later layers can operate more efficiently with reduced parameters. Combining these insights, we propose principled strategies for allocating attention heads and dimensions across Transformer layers, shedding light on theoretically grounded model efficiency of Transformer-based architectures.
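The saturation behavior described in the abstract can be glimpsed with a small numerical experiment (this is not the paper's formal result, just an illustration under assumed i.i.d. Gaussian queries and keys): raw dot-product logits grow in spread like the square root of the head dimension, so the softmax increasingly concentrates on a single key, and extra dimensions change the attention output less and less.

```python
# Illustration (not the paper's proof): with random Gaussian queries/keys,
# the average peak attention weight rises toward 1 as head_dim grows,
# i.e. the softmax saturates and extra dimensions yield diminishing returns.
import math
import random

random.seed(0)
seq_len = 128  # assumed sequence length for this demo

def max_attention_weight(head_dim, trials=200):
    total = 0.0
    for _ in range(trials):
        q = [random.gauss(0, 1) for _ in range(head_dim)]
        keys = [[random.gauss(0, 1) for _ in range(head_dim)]
                for _ in range(seq_len)]
        logits = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        m = max(logits)                      # subtract max for stability
        exps = [math.exp(x - m) for x in logits]
        total += max(exps) / sum(exps)       # weight of the most-attended key
    return total / trials

for d in (4, 16, 64, 256):
    print(f"head_dim={d:>3}: avg max attention weight = {max_attention_weight(d):.3f}")
```

As the head dimension grows, the printed peak weight climbs toward 1, i.e. the softmax output approaches a one-hot distribution, which is one intuition for why wider heads in deeper layers may buy little additional expressivity.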
Problem

Research questions and friction points this paper is trying to address.

Optimizing attention head allocation across Transformer layers
Analyzing head dimension saturation effects on sequence learning
Balancing parameter efficiency with model expressivity theoretically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematical analysis of early layer information extraction
Proved saturation behavior of softmax activations
Proposed principled parameter allocation strategies across layers
Ruoxi Yu
Center for Data Science, Peking University
Haotian Jiang
Institute for Functional Intelligent Materials, National University of Singapore
Jingpu Cheng
Department of Mathematics, National University of Singapore
Penghao Yu
Department of Mathematics, National University of Singapore
Qianxiao Li
Assistant Professor, Department of Mathematics and Institute for Functional Intelligent Materials
applied mathematics, machine learning, scientific computing, control theory, materials science
Zhong Li
Microsoft Research Asia