Progressive Depth Up-scaling via Optimal Transport

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing depth expansion methods for large language models typically initialize new layers via weight copying or averaging, neglecting neuron permutation discrepancies across layers. This leads to inter-layer misalignment and performance degradation. This paper proposes OpT-DeUS, the first approach to incorporate Optimal Transport (OT) theory into weight initialization for depth expansion. It performs neuron-level alignment and weighted fusion between adjacent Transformer blocks, enabling structure-aware, progressive depth expansion. By mitigating neuron permutation inconsistency across layers, OpT-DeUS accelerates convergence and improves final performance in both continual pre-training and fine-tuning. Experiments across multiple model scales demonstrate that OpT-DeUS consistently boosts accuracy; moreover, by inserting new layers near the top, it shortens the back-propagation path, yielding additional training speedup.
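The neuron-level alignment step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the hard-assignment special case of optimal transport (solved exactly with the Hungarian algorithm) on single weight matrices, whereas OpT-DeUS operates on full Transformer blocks; the function name `ot_align_and_fuse` and the fusion weight `alpha` are illustrative assumptions.

```python
# Hedged sketch: align neurons of two adjacent layers via optimal transport
# (hard one-to-one matching), then fuse the aligned weights to initialize a
# new layer. Simplified to single weight matrices for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_align_and_fuse(W_a, W_b, alpha=0.5):
    """Permute rows (neurons) of W_b to best match W_a, then interpolate."""
    # Cost: squared Euclidean distance between every pair of neuron vectors.
    cost = ((W_a[:, None, :] - W_b[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)  # minimum-cost one-to-one matching
    W_b_aligned = W_b[cols]                   # reorder neurons of layer b
    return alpha * W_a + (1 - alpha) * W_b_aligned

rng = np.random.default_rng(0)
W_a = rng.normal(size=(8, 4))
W_b = W_a[rng.permutation(8)]   # layer b: a permuted copy of layer a
W_new = ot_align_and_fuse(W_a, W_b)
print(np.allclose(W_new, W_a))  # alignment undoes the permutation → True
```

Naive averaging of `W_a` and `W_b` without the alignment step would blend unrelated neurons; the matching recovers the permutation first, which is the mismatch the paper targets.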

📝 Abstract
Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling offers training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average weights from base layers, neglecting neuron permutation differences. This limitation can cause misalignment that harms performance. Inspired by the use of Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS aligns and fuses Transformer blocks in adjacent base layers via OT to create new layers, mitigating neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and improved training efficiency over existing methods for continual pre-training and supervised fine-tuning across different model sizes. Our extensive analysis of interpolation positions further shows that inserting new layers closer to the top yields higher training efficiency, due to shorter back-propagation time, while obtaining additional performance gains.
Problem

Research questions and friction points this paper is trying to address.

Addresses neuron permutation mismatch in depth up-scaling
Reduces training costs while scaling Large Language Models
Improves alignment of Transformer blocks via Optimal Transport
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Transport aligns neuron permutation differences
Fuses Transformer blocks for new layer creation
Inserts layers near top for efficiency gains
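The last point, inserting new layers near the top, can be illustrated with a small sketch. This is an assumption-laden toy: `expand_near_top` and the `fuse` callback are hypothetical names, and real layers would be Transformer blocks fused as above, not strings.

```python
# Hedged sketch: depth expansion that inserts fused layers between the topmost
# adjacent pairs. New layers high in the stack receive gradients after a
# shorter back-propagation path, which is the efficiency gain claimed above.
def expand_near_top(layers, fuse, n_new):
    """Insert n_new fused layers between the topmost adjacent pairs."""
    layers = list(layers)
    # Walk downward from the top, fusing each adjacent pair once.
    for i in range(len(layers) - 1, len(layers) - 1 - n_new, -1):
        layers.insert(i, fuse(layers[i - 1], layers[i]))
    return layers

base = ["L0", "L1", "L2", "L3"]
fuse = lambda a, b: f"fuse({a},{b})"
print(expand_near_top(base, fuse, 2))
# → ['L0', 'L1', 'fuse(L1,L2)', 'L2', 'fuse(L2,L3)', 'L3']
```

The same routine with insertion indices near the bottom would produce an equally valid stack, but every new layer's gradient would then traverse more of the network during the backward pass.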
Mingzi Cao
University of Sheffield
Natural Language Processing, Machine Learning
Xi Wang
University of Sheffield
Nikolaos Aletras
University of Sheffield