AI Summary
Deploying large language models (LLMs) across diverse hardware constraints necessitates multiple model sizes, but conventional approaches (training or distilling each size independently) incur high computational costs and yield coarse-grained, inflexible model families.
Method: This paper proposes a zero-shot model-size interpolation framework grounded in knowledge distillation and layer-block alignment. Leveraging the "boomerang distillation" phenomenon, where knowledge distilled from a large model to a small one enables reverse interpolation to reconstruct intermediate-sized models, the framework eliminates the need for additional training. Layer-block reorganization and structured pruning are jointly employed to achieve fine-grained, smooth performance scaling.
Contribution/Results: The interpolated intermediate models match or surpass dedicated training and distillation baselines at equivalent parameter counts across diverse benchmarks. Experiments demonstrate strong effectiveness, generalization across architectures and tasks, and enhanced deployment flexibility, enabling on-the-fly adaptation to heterogeneous hardware without retraining.
Abstract
Large language models (LLMs) are typically deployed under diverse memory and compute constraints. Existing approaches build model families by training each size independently, which is prohibitively expensive and provides only coarse-grained size options. In this work, we identify a novel phenomenon that we call boomerang distillation: starting from a large base model (the teacher), one first distills down to a small student and then progressively reconstructs intermediate-sized models by re-incorporating blocks of teacher layers into the student without any additional training. This process produces zero-shot interpolated models of many intermediate sizes whose performance scales smoothly between the student and teacher, often matching or surpassing pretrained or distilled models of the same size. We further analyze when this type of interpolation succeeds, showing that alignment between teacher and student through pruning and distillation is essential. Boomerang distillation thus provides a simple and efficient way to generate fine-grained model families, dramatically reducing training cost while enabling flexible adaptation across deployment environments. The code and models are available at https://github.com/dcml-lab/boomerang-distillation.
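The reconstruction step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual implementation): models are represented as flat lists of layer blocks, and `block_map` is an assumed alignment from each student block to the consecutive teacher layers it was distilled from. Choosing how many student blocks `k` to swap back for their teacher counterparts yields a family of intermediate-sized models with no additional training.

```python
def build_interpolated(student_blocks, teacher_layers, block_map, k):
    """Assemble an intermediate model by re-incorporating teacher layers.

    The first k student blocks are replaced by their aligned teacher layer
    blocks; the remaining student blocks are kept. k=0 recovers the student,
    k=len(student_blocks) recovers the teacher's full layer stack.

    Illustrative sketch only: layer objects here can be any stand-ins
    (e.g. nn.Module instances in practice); block_map is an assumed
    student-block -> teacher-layer-indices alignment.
    """
    layers = []
    for i, s_block in enumerate(student_blocks):
        if i < k:
            layers.extend(teacher_layers[j] for j in block_map[i])
        else:
            layers.append(s_block)
    return layers


# Toy example: a 6-layer teacher distilled into a 3-block student,
# where each student block is aligned with two consecutive teacher layers.
student = ["s0", "s1", "s2"]
teacher = ["t0", "t1", "t2", "t3", "t4", "t5"]
mapping = {0: [0, 1], 1: [2, 3], 2: [4, 5]}

# Sweeping k produces a smooth family of intermediate model sizes.
for k in range(len(student) + 1):
    print(k, build_interpolated(student, teacher, mapping, k))
```

In a real setting the swapped-in teacher blocks are full transformer layers, and the smooth performance scaling reported in the paper depends on the student having been pruned from and distilled against this same teacher so the blocks remain aligned.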