Rethinking Optimization and Architecture for Tiny Language Models

📅 2024-02-05
🏛️ International Conference on Machine Learning
📈 Citations: 12
Influential: 0
📄 PDF
🤖 AI Summary
To address computational and memory bottlenecks in deploying Tiny Language Models (Tiny LMs) on mobile devices, this work systematically re-engineers the neural architecture, parameter initialization, and training strategy of 1B-scale models. We propose three core contributions: (1) the first empirical validation of synergistic gains from tokenizer compression, parameter inheritance, and multi-phase training; (2) a lightweight scaling formula specifically designed for small models, departing from conventional LLM scaling paradigms; and (3) an evidence-driven ablation framework integrating architectural fine-tuning, inheritance-based initialization, multi-stage optimization, and tokenizer lightweighting. Trained on a 1.6T multilingual corpus, PanGu-π-1B Pro achieves an average +8.87-point improvement across mainstream benchmarks; its 1.5B variant surpasses several larger state-of-the-art models. All code is publicly released.

Technology Category

Application Category

📝 Abstract
The power of large language models (LLMs) has been demonstrated through numerous data and computing resources. However, the application of language models on mobile devices is facing huge challenge on the computation and memory costs, that is, tiny language models with high performance are urgently required. Limited by the highly complex training process, there are many details for optimizing language models that are seldom studied carefully. In this study, based on a tiny language model with 1B parameters, we carefully design a series of empirical study to analyze the effect of each component. Three perspectives are mainly discussed, ie, neural architecture, parameter initialization, and optimization strategy. Several design formulas are empirically proved especially effective for tiny language models, including tokenizer compression, architecture tweaking, parameter inheritance and multiple-round training. Then we train PanGu-$pi$-1B Pro and PanGu-$pi$-1.5B Pro on 1.6T multilingual corpora, following the established formulas. Experimental results demonstrate the improved optimization and architecture yield a notable average improvement of 8.87 on benchmark evaluation sets for PanGu-$pi$-1B Pro. Besides, PanGu-$pi$-1.5B Pro surpasses a range of SOTA models with larger model sizes, validating its superior performance. The code is available at https://github.com/YuchuanTian/RethinkTinyLM.
Problem

Research questions and friction points this paper is trying to address.

Optimizing tiny language models for mobile devices
Analyzing neural architecture, initialization, and optimization strategies
Improving performance of 1B-parameter models via design formulas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tokenizer compression for efficiency
Architecture tweaking enhances performance
Parameter inheritance boosts training
🔎 Similar Papers
No similar papers found.
Yehui Tang
Yehui Tang
Shanghai Jiao Tong University
Machine LearningQuantum AI & AI4Science
Fangcheng Liu
Fangcheng Liu
Huawei Noah's Ark Lab, Peking University
LLMsGenerative AIAdversarials
Y
Yunsheng Ni
Huawei Noah’s Ark Lab
Y
Yuchuan Tian
Huawei Noah’s Ark Lab, Peking University
Z
Zheyuan Bai
Huawei Noah’s Ark Lab
Yi-Qi Hu
Yi-Qi Hu
Consumer Business Group, Huawei
S
Sichao Liu
Consumer Business Group, Huawei
S
Shangling Jui
Huawei Kirin Solution
K
Kai Han
Huawei Noah’s Ark Lab
Yunhe Wang
Yunhe Wang
Noah's Ark Lab, Huawei Technologies
Deep LearningLanguage ModelMachine LearningComputer Vision