Don't be lazy: CompleteP enables compute-efficient deep transformers

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two bottlenecks in large language model (LLM) training that arise from parameterization strategies: (1) poor transferability of base hyperparameters (e.g., learning rate) across model depths, which forces costly re-tuning at every scale; and (2) "lazy learning" induced by certain parameterizations, in which deep networks degenerate into near-linear transformations and lose nonlinear representational capacity. To resolve both issues, the authors propose CompleteP, a parameterization that jointly achieves depth-wise hyperparameter transfer and non-lazy learning in all layers. Its core components are theoretically grounded parameter-scaling rules, a formal criterion for non-lazy learning, and depth-dependent adjustment of learning rates and other optimizer hyperparameters. Experiments show that CompleteP improves compute efficiency by 12–34% at equivalent task performance, enables a wider range of compute-efficient width–depth trade-offs, and reduces training overhead for large-scale Transformers.
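The depth-scaling idea behind the summary can be illustrated with a toy residual stack. The sketch below is an illustrative assumption, not the paper's exact prescription: each residual branch output is multiplied by `L ** -alpha`, where `alpha = 1` stands in for CompleteP-style depth scaling and `alpha = 0` for an unscaled stack. With `alpha = 1`, the total residual-stream update stays roughly depth-independent, so deeper models do not blow up their activations.

```python
import math
import random

def residual_stream_norm(L, alpha):
    """Simulate a toy residual stack h <- h + L**(-alpha) * f(h)
    and return the final hidden-state magnitude.

    alpha = 1 is a stand-in for CompleteP-style depth scaling
    (an illustrative assumption); alpha = 0 is an unscaled stack.
    """
    random.seed(0)
    h = 1.0
    for _ in range(L):
        # stand-in for a nonlinear block producing an O(1)-sized output
        f = math.tanh(h) * random.uniform(0.5, 1.5)
        h = h + (L ** -alpha) * f
    return h

# Unscaled (alpha=0) stacks grow with depth; scaled (alpha=1) stacks stay bounded.
print(residual_stream_norm(64, 0.0))  # grows roughly linearly in depth
print(residual_stream_norm(64, 1.0))  # stays O(1) regardless of depth
```

The point of the sketch is that with the `1/L` branch scaling, the hidden state after `L` layers is of comparable size whether `L = 8` or `L = 64`, which is the kind of depth-invariance that makes base hyperparameters transferable.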

📝 Abstract
We study compute efficiency of LLM training when using different parameterizations, i.e., rules for adjusting model and optimizer hyperparameters (HPs) as model size changes. Some parameterizations fail to transfer optimal base HPs (such as learning rate) across changes in model depth, requiring practitioners to either re-tune these HPs as they scale up (expensive), or accept sub-optimal training when re-tuning is prohibitive. Even when they achieve HP transfer, we develop theory to show parameterizations may still exist in the lazy learning regime where layers learn only features close to their linearization, preventing effective use of depth and nonlinearity. Finally, we identify and adopt the unique parameterization we call CompleteP that achieves both depth-wise HP transfer and non-lazy learning in all layers. CompleteP enables a wider range of model width/depth ratios to remain compute-efficient, unlocking shapes better suited for different hardware settings and operational contexts. Moreover, CompleteP enables 12-34% compute efficiency improvements over the prior state-of-the-art.
Problem

Research questions and friction points this paper is trying to address.

Compute efficiency of LLM training varies with the choice of parameterization
Failed hyperparameter transfer across model depths forces expensive re-tuning or sub-optimal training
Lazy learning prevents deep networks from making effective use of depth and nonlinearity
Innovation

Methods, ideas, or system contributions that make the work stand out.

CompleteP achieves depth-wise hyperparameter transfer
CompleteP avoids lazy learning in all layers
CompleteP improves compute efficiency by 12–34% over the prior state of the art
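The hyperparameter-transfer workflow implied above can be sketched as a simple rescaling rule: tune base hyperparameters once on a shallow proxy model, then map them to a deeper target. The specific rule below (learning rate scaled by `(base_depth / target_depth) ** lr_exponent`, residual branches by `1 / target_depth`) is a hypothetical illustration of the idea, not the paper's exact formulas.

```python
def transfer_hps(base_hps, base_depth, target_depth, lr_exponent=1.0):
    """Toy depth-wise HP transfer: tune HPs on a shallow proxy model,
    then rescale them for a deeper target model.

    The scaling exponents here are illustrative assumptions; the paper
    derives the actual depth-dependent rules for each parameter group.
    """
    ratio = base_depth / target_depth
    return {
        # learning rate shrinks as depth grows (hypothetical rule)
        "lr": base_hps["lr"] * ratio ** lr_exponent,
        # residual branches scaled down with depth (hypothetical rule)
        "residual_scale": 1.0 / target_depth,
    }

# Tune once at depth 8, transfer to depth 64 without re-tuning.
hps_64 = transfer_hps({"lr": 3e-3}, base_depth=8, target_depth=64)
```

The payoff claimed in the summary is exactly this workflow: the expensive sweep happens once on a small model, and the scaled model trains near-optimally with the transferred values.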