Efficient Construction of Model Family through Progressive Training Using Model Expansion

📅 2025-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Constructing multi-scale model families for large language model (LLM) deployment incurs high training costs and suffers from behavioral inconsistency across sizes. Method: This paper proposes a progressive family training paradigm: starting from a small base model, it systematically expands the architecture, via structured operations such as layer duplication and growth in attention heads and hidden dimensions, to derive a 1B–8B model series, eliminating redundant independent training. It introduces two key innovations: (i) size-adaptive maximum learning rate scheduling, and (ii) cross-size joint optimization, both designed to enhance behavioral consistency across scales. Results: Experiments demonstrate a 25% reduction in training cost at comparable performance levels, consistent improvements over independently trained baselines on multiple benchmarks (e.g., MMLU, GSM8K), and significantly more stable output distributions across model sizes.
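The layer-duplication operation described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: layers are represented as hypothetical parameter dicts, and each source layer is repeated in place so the expanded stack inherits the base model's weights and ordering.

```python
import copy

def expand_depth(layers, target_depth):
    """Grow a layer stack by duplicating existing layers in place.

    `layers` is a list of per-layer parameter dicts (a hypothetical
    representation for illustration). Each source layer is repeated
    consecutively, so the deeper model starts from the base model's
    weights rather than a random initialization.
    """
    assert target_depth % len(layers) == 0, "target must be a multiple of source depth"
    repeat = target_depth // len(layers)
    expanded = []
    for layer in layers:
        for _ in range(repeat):
            expanded.append(copy.deepcopy(layer))
    return expanded

small = [{"name": f"block_{i}"} for i in range(4)]  # 4-layer base model
large = expand_depth(small, 8)                      # duplicated to 8 layers
print(len(large))  # 8
```

Growth in attention heads or hidden dimensions would follow the same pattern, copying (and possibly splitting) existing weight tensors rather than whole layers.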

📝 Abstract
As Large Language Models (LLMs) gain widespread practical application, providing a model family with different parameter sizes has become standard practice to address diverse computational requirements. Conventionally, each model in a family is trained independently, resulting in computational costs that scale additively with the number of models. We propose an efficient method for constructing a model family through progressive training, where smaller models are incrementally expanded to larger sizes to create a complete model family. Through extensive experiments with a model family ranging from 1B to 8B parameters, we demonstrate that our method reduces computational costs by approximately 25% while maintaining comparable performance to independently trained models. Furthermore, by strategically adjusting maximum learning rates based on model size, our method outperforms independent training across various metrics. Beyond performance gains, our approach offers an additional advantage: models in our family tend to yield more consistent behavior across different model sizes.
Problem

Research questions and friction points this paper is trying to address.

High computational cost of training each model in a family independently, scaling additively with the number of models
Behavioral inconsistency across model sizes when family members are trained separately
Lack of an efficient path for expanding smaller models into larger ones within a single family
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive training with structured model expansion (layer duplication, attention head/dimension growth)
Size-adaptive maximum learning rate scheduling
Cross-size joint optimization for consistent behavior across model sizes
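The size-adaptive maximum learning rate idea above can be sketched as follows. The paper's exact schedule is not given here; this is a hypothetical power-law rule (peak LR shrinking as parameter count grows) with illustrative `base_lr` and `exponent` values, meant only to show the shape of the adjustment.

```python
def max_lr_for_size(params_billion, base_lr=3e-4, base_size=1.0, exponent=0.5):
    """Hypothetical size-adaptive peak learning rate.

    Scales the base model's peak LR down as the parameter count grows,
    following an assumed power law: lr = base_lr * (base / size) ** exponent.
    All constants here are illustrative, not taken from the paper.
    """
    return base_lr * (base_size / params_billion) ** exponent

# Peak LRs across an assumed 1B-8B family
for size in (1, 3, 8):
    print(f"{size}B -> peak lr {max_lr_for_size(size):.2e}")
```

In practice such a rule would set the ceiling of a warmup-then-decay schedule for each expanded model, so that larger family members train with proportionally gentler updates.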