Where to Begin: Efficient Pretraining via Subnetwork Selection and Distillation

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and large data requirements of pretraining small language models (SLMs), this paper proposes an efficient pretraining framework that integrates structured subnetwork selection with knowledge distillation. Methodologically, it (1) employs evolutionary search to automatically identify high-performing, structurally sparse subnetworks for initialization, replacing conventional random initialization; (2) leverages logits and hidden representations from a large language model (LLM) for knowledge distillation, accelerating convergence and improving downstream performance; and (3) incorporates a parameter inheritance mechanism to further improve training efficiency. Experimental results show that the method matches the validation perplexity of a comparable Pythia-scale SLM while requiring 9.2x fewer pretraining tokens. This substantially reduces computational overhead and offers a practical recipe for resource-efficient SLM pretraining in compute-constrained environments.
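The logit-distillation component described above is commonly implemented as a temperature-softened KL divergence between teacher and student output distributions (hidden-representation matching would add a separate MSE term). The sketch below is illustrative only and is not the paper's implementation; the function names and the temperature value are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T^2 factor keeps the loss scale comparable across temperatures,
    a standard convention in logit distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

When the student exactly reproduces the teacher's logits the loss is zero, and it grows as the two distributions diverge; a higher temperature softens both distributions so the student also learns from the teacher's relative rankings of unlikely tokens.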

📝 Abstract
Small language models (SLMs) offer an efficient and accessible alternative to Large Language Models (LLMs), delivering strong performance while using far fewer resources. We introduce a simple and effective framework for pretraining SLMs that brings together three complementary ideas. First, we identify structurally sparse sub-network initializations that consistently outperform randomly initialized models of similar size under the same compute budget. Second, we use evolutionary search to automatically discover high-quality sub-network initializations, providing better starting points for pretraining. Third, we apply knowledge distillation from larger teacher models to speed up training and improve generalization. Together, these components make SLM pretraining substantially more efficient: our best model, discovered using evolutionary search and initialized with LLM weights, matches the validation perplexity of a comparable Pythia SLM while requiring 9.2x fewer pretraining tokens. We release all code and models at https://github.com/whittle-org/whittle/, offering a practical and reproducible path toward cost-efficient small language model development at scale.
Problem

Research questions and friction points this paper is trying to address.

How to reduce the high compute and data cost of pretraining SLMs from scratch
Whether structurally sparse subnetwork initializations can outperform random initialization under an equal compute budget
How to transfer knowledge from larger teacher models to accelerate SLM convergence and improve generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies structurally sparse sub-network initializations for efficiency
Uses evolutionary search to discover high-quality sub-network initializations
Applies knowledge distillation from larger teacher models
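The evolutionary search over sub-network initializations can be sketched as a minimal select-and-mutate loop. Everything below is a hypothetical stand-in: the paper searches structured sub-networks of an LLM and scores them by validation quality, whereas this toy fitness just matches per-layer widths to a parameter budget.

```python
import random

def evolutionary_search(sample_config, mutate, fitness,
                        population_size=8, generations=10,
                        parent_fraction=0.25, seed=0):
    """Minimal evolutionary search: keep the best configs, refill by mutation.

    `fitness` returns a score to MINIMIZE (in the paper's setting this
    would be, e.g., validation perplexity of a candidate subnetwork).
    """
    rng = random.Random(seed)
    population = [sample_config(rng) for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness)
        parents = scored[:max(1, int(parent_fraction * population_size))]
        # Refill the population with mutated copies of the survivors.
        population = parents + [
            mutate(rng.choice(parents), rng)
            for _ in range(population_size - len(parents))
        ]
    return min(population, key=fitness)

# Toy search space: per-layer hidden widths (hypothetical example).
WIDTHS = (256, 512, 768, 1024)

def sample(rng):
    return [rng.choice(WIDTHS) for _ in range(4)]

def mutate(cfg, rng):
    cfg = list(cfg)  # copy, then perturb one layer's width
    cfg[rng.randrange(len(cfg))] = rng.choice(WIDTHS)
    return cfg

# Toy fitness: distance from a 2048-unit total width budget.
best = evolutionary_search(sample, mutate,
                           fitness=lambda c: abs(sum(c) - 2048))
```

The design choice worth noting is that only cheap-to-evaluate candidates make this loop practical: the paper's fitness signal (subnetwork quality) must be estimable far more cheaply than full pretraining for the search to pay off.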