🤖 AI Summary
Existing depth expansion methods for large language models typically initialize new layers by copying or averaging weights, ignoring neuron permutation discrepancies across layers; the resulting inter-layer misalignment degrades performance. This paper proposes OpT-DeUS, the first approach to apply Optimal Transport (OT) theory to weight initialization for depth expansion. It performs neuron-level alignment and weighted fusion between adjacent Transformer blocks, enabling structure-aware, progressive depth expansion. By mitigating neuron permutation inconsistency across layers, OpT-DeUS significantly accelerates convergence and improves final performance in both continual pretraining and fine-tuning. Experiments across multiple model scales demonstrate that OpT-DeUS consistently improves accuracy; moreover, inserting new layers near the top shortens the back-propagation path, yielding additional training speedup.
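To make the "neuron-level alignment and weighted fusion" idea concrete, here is a minimal sketch of aligning two weight matrices with a hard OT plan and interpolating them to initialize a new layer. This is an illustration, not the paper's actual algorithm: the cost function (negative inner-product similarity), the reduction of OT with uniform marginals to a linear assignment problem, and the `alpha` fusion weight are all assumptions for the sake of the example.

```python
# Hypothetical sketch of OT-style neuron alignment and weighted fusion
# between two adjacent layers' weight matrices. OpT-DeUS's real procedure
# (cost design, transport solver, per-block handling) may differ.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_align_and_fuse(W_a, W_b, alpha=0.5):
    """Align neurons (rows) of W_b to W_a, then interpolate the two
    matrices to initialize a new layer's weights."""
    # Cost: negative inner-product similarity between neuron weight vectors.
    cost = -W_a @ W_b.T
    # Hard OT with uniform marginals reduces to the assignment problem.
    row_ind, col_ind = linear_sum_assignment(cost)
    perm = np.empty_like(col_ind)
    perm[row_ind] = col_ind
    W_b_aligned = W_b[perm]  # permute W_b's neurons to match W_a's ordering
    return alpha * W_a + (1 - alpha) * W_b_aligned

rng = np.random.default_rng(0)
W_a = rng.standard_normal((8, 16))
W_b = W_a[rng.permutation(8)]  # W_b: a neuron-permuted copy of W_a
W_new = ot_align_and_fuse(W_a, W_b)
print(np.allclose(W_new, W_a))  # True: alignment undoes the permutation
```

Naive averaging of `W_a` and `W_b` would blend unrelated neurons; aligning first recovers the permutation, so the fused layer stays consistent with both neighbors.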
📝 Abstract
Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling offers training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average weights from base layers, neglecting neuron permutation differences; this can cause misalignment that harms performance. Inspired by the use of Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS aligns and fuses Transformer blocks in adjacent base layers via OT to create new layers, mitigating neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and higher training efficiency than existing methods for continual pre-training and supervised fine-tuning across different model sizes. To further evaluate the impact of interpolation position, we conduct extensive analysis, which shows that inserting new layers closer to the top yields higher training efficiency, due to shorter back-propagation paths, while obtaining additional performance gains.
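The interpolation-position analysis can be sketched as a simple list operation on a model's layer stack: depth up-scaling inserts a newly initialized block at a chosen index, and "closer to the top" means a larger index. The function name, layer representation, and index convention below are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of inserting a new block into a layer stack.
# Index 0 is the bottom (input side); len(layers) is the very top.
import copy

def expand_depth(layers, new_layer, position):
    """Return a grown stack with new_layer inserted at the given index."""
    grown = list(layers)
    grown.insert(position, copy.deepcopy(new_layer))
    return grown

base = [f"block_{i}" for i in range(4)]
# Insert near the top (index 3) rather than the middle (index 2).
print(expand_depth(base, "block_new", 3))
# → ['block_0', 'block_1', 'block_2', 'block_new', 'block_3']
```

The intuition for the reported speedup is that when only the new layers receive updates, a top-side insertion means gradients need not be propagated through the many frozen blocks below them.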