MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training

📅 2025-08-28
🤖 AI Summary
During large-batch training of language models, a surge in the maximum attention logit creates an information bottleneck in attention layers, degrading optimizer performance, particularly for AdamW. LAMB partially alleviates this issue, but its l2-norm-based trust ratio fails to effectively constrain the maximum values of query/key weights, and its weight-wise scaling overlooks structural dependencies within rows and columns. This paper proposes MERIT, a novel optimizer that (1) computes trust ratios from the max-norm to constrain the max attention logit directly and (2) constructs element-wise trust ratios for fine-grained, structure-aware update scaling. Evaluated on the GPT-2 family, MERIT enables stable training of GPT-2 Medium at a 6K batch size, matching the final performance of standard 480-batch training. It improves convergence stability, training efficiency, and scalability, demonstrating robustness under extreme batch-size scaling.

📝 Abstract
Large-batch training has become a cornerstone in accelerating the training of deep neural networks, yet it poses challenges in optimization and generalization. Existing optimizers like AdamW exhibit performance degradation during language models' large-batch training due to the information bottleneck in attention layers caused by the sharp increase of the max attention logit. While the LAMB optimizer partially addresses this issue, some attention layers are still affected. The reason is that $l_2$-norm-based trust ratios in LAMB are less effective at directly influencing the max value of query/key weights. Furthermore, the weight-wise trust ratio in LAMB is error-prone, as it overlooks relationships among weight values within rows or columns. Building on these observations, we propose a novel optimizer, MERIT, which leverages the max-norm to calculate the trust ratio and constrain the max attention logit more effectively. Moreover, we construct element-wise trust ratios that provide more robust update scaling by focusing on local weight structures. Extensive large-batch training experiments across various sizes of GPT-2 models demonstrate the superior performance of MERIT. Notably, when training GPT-2 Medium, MERIT enables a 6k batch size without any performance degradation compared to the standard batch size (480) with 48B training tokens. This work highlights the importance of considering the max attention logit and finer-granularity trust ratios in large-batch training. It improves training stability and paves the way for larger batch usage, enabling faster development and iteration of large language models. Code is available at https://github.com/NUS-HPC-AI-Lab/MERIT.
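The trust-ratio change described in the abstract can be illustrated in a minimal sketch. This is a hypothetical NumPy illustration contrasting a LAMB-style l2-norm layer-wise trust ratio with a max-norm ratio; the function names and the exact update rule are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def lamb_trust_ratio(w, u, eps=1e-8):
    """LAMB-style layer-wise trust ratio using l2 norms (for contrast).
    A single large entry in w barely moves the l2 norm, so the max
    weight value is only weakly constrained."""
    return np.linalg.norm(w) / (np.linalg.norm(u) + eps)

def merit_trust_ratio(w, u, eps=1e-8):
    """Max-norm trust ratio: scales the update by the ratio of the
    largest weight magnitude to the largest update magnitude, which
    more directly caps growth of query/key weight maxima (and hence
    the max attention logit)."""
    return np.max(np.abs(w)) / (np.max(np.abs(u)) + eps)

# The scaled step then follows the usual trust-ratio pattern:
#   w <- w - lr * ratio * u
w = np.array([[1.0, -3.0], [0.5, 2.0]])   # toy query-projection weights
u = np.array([[0.5, 1.0], [-2.0, 0.25]])  # toy Adam-style update
lr = 0.01
w_new = w - lr * merit_trust_ratio(w, u) * u
```

Under the max-norm, an update whose largest entry would overshoot the largest existing weight gets scaled down, regardless of how small its l2 norm is.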
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation in large-batch language model training
Constrains the max attention logit more effectively than existing optimizers
Improves trust-ratio precision through element-wise scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Max-norm trust ratio to constrain attention logits
Element-wise trust ratios for local weight structures
Enables larger batch sizes without performance degradation
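The element-wise idea above can be sketched as follows. The specific construction here (taking the minimum of row-wise and column-wise max-norm ratios, broadcast to each element) is an illustrative assumption about how local structure might be used; the paper's exact rule may differ.

```python
import numpy as np

def elementwise_trust_ratio(w, u, eps=1e-8):
    """Hypothetical element-wise trust ratio: compute max-norm ratios
    per row and per column, then take the element-wise minimum as a
    conservative local scale that respects both the row and column
    the weight belongs to."""
    row = np.max(np.abs(w), axis=1, keepdims=True) / \
          (np.max(np.abs(u), axis=1, keepdims=True) + eps)   # shape (m, 1)
    col = np.max(np.abs(w), axis=0, keepdims=True) / \
          (np.max(np.abs(u), axis=0, keepdims=True) + eps)   # shape (1, n)
    return np.minimum(row, col)  # broadcasts to shape (m, n)

w = np.array([[1.0, -4.0], [0.5, 2.0]])
u = np.ones_like(w)
r = elementwise_trust_ratio(w, u)  # one scale per weight entry
```

Unlike a single weight-wise ratio, this produces a separate scale for every entry, so an outlier in one row cannot inflate the step taken by unrelated weights.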
👥 Authors
Yang Luo, School of Computing, National University of Singapore
Zangwei Zheng, Ph.D., National University of Singapore (Machine Learning, High Performance Computing, Computer Vision)
Ziheng Qin, School of Computing, National University of Singapore
Zirui Zhu, School of Computing, National University of Singapore
Yong Liu, School of Computing, National University of Singapore
Yang You, Postdoc, Stanford University (3D vision, computer graphics, computational geometry)