Accelerating Large Language Model Pretraining via LFR Pedagogy: Learn, Focus, and Review

📅 2024-09-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency, high computational cost, and knowledge forgetting caused by random data sampling in large language model (LLM) pretraining, this paper proposes the dynamic, adaptive Learn-Focus-Review (LFR) pedagogical paradigm. LFR formalizes human cognitive principles, particularly spaced repetition, as a training scheduling mechanism: it tracks the model's learning state at the level of token blocks, quantifies how prone each block is to being forgotten, and uses lightweight online evaluation to prioritize data in real time. Departing from static data streaming, LFR achieves strong results on SlimPajama and OpenWebText: pretrained Llama/GPT variants surpass full-data baselines while using only 5%–19% of the training tokens, and with just 3.2% of tokens LFR matches the performance of a Pythia model with twice the parameter count. On downstream tasks it reports an average accuracy gain of 3.7% and a 12.4% reduction in perplexity.

📝 Abstract
Traditional Large Language Model (LLM) pretraining relies on autoregressive language modeling with randomly sampled data from web-scale datasets. Inspired by human learning techniques like spaced repetition, we hypothesize that random sampling leads to high training costs, lower-quality models, and significant data forgetting. To address these inefficiencies, we propose the Learn-Focus-Review (LFR) paradigm -- a dynamic training approach that adapts to the model's learning progress. LFR tracks the model's learning performance across data blocks (sequences of tokens) and prioritizes revisiting challenging regions of the dataset that are more prone to being forgotten, enabling better retention and more efficient learning. Using the LFR paradigm, we pretrained Llama and GPT models on the SlimPajama and OpenWebText datasets, respectively. These models were evaluated on downstream tasks across various domains, including question answering, problem-solving, commonsense reasoning, language modeling, and translation. Compared to baseline models trained on the full datasets, LFR consistently achieved lower perplexity and higher accuracy, while using only 5%--19% of the training tokens. Furthermore, LFR matched the performance of industry-standard Pythia models with up to 2× the parameter count, using just 3.2% of the training tokens, demonstrating its effectiveness and efficiency.
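The abstract's core idea -- tracking per-block learning performance and preferentially revisiting blocks that are hard or prone to being forgotten -- can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation; the class name, thresholds, and sampling policy are assumptions made for clarity:

```python
import random

# Hypothetical sketch of a Learn-Focus-Review-style data scheduler (not the
# paper's actual code). Each data block carries a loss estimate; training
# mostly "focuses" on the currently hardest blocks, while an occasional
# "review" step revisits blocks that were difficult at any point, echoing
# spaced repetition.

class LFRSampler:
    def __init__(self, num_blocks, focus_fraction=0.2, review_prob=0.1):
        self.losses = [float("inf")] * num_blocks   # unseen blocks rank highest
        self.was_hard = [False] * num_blocks        # blocks ever flagged as hard
        self.focus_fraction = focus_fraction        # top fraction to focus on
        self.review_prob = review_prob              # chance of a review step

    def next_block(self):
        # Review phase: occasionally revisit a previously difficult block.
        hard = [i for i, h in enumerate(self.was_hard) if h]
        if hard and random.random() < self.review_prob:
            return random.choice(hard)
        # Focus phase: otherwise pick among the highest-loss blocks.
        ranked = sorted(range(len(self.losses)), key=lambda i: -self.losses[i])
        k = max(1, int(len(ranked) * self.focus_fraction))
        return random.choice(ranked[:k])

    def update(self, block_id, loss, hard_threshold=2.0):
        # Record the latest training loss observed on this block.
        self.losses[block_id] = loss
        if loss > hard_threshold:
            self.was_hard[block_id] = True
```

In a training loop, `next_block()` replaces uniform random sampling and `update()` is called with each block's measured loss, so the schedule adapts as the model learns; the paper's actual prioritization and forgetting metrics may differ from this toy policy.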
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Pre-training Efficiency
Knowledge Retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

LFR method
Pre-training efficiency
Selective data review