🤖 AI Summary
This work addresses the inefficiency with which conventional language models acquire structured reasoning capabilities when pretrained on large-scale natural corpora. To decouple reasoning from knowledge acquisition, the authors propose a pre-warming strategy that front-loads an extremely small amount (0.1%) of formal procedural data—such as Dyck sequences—before standard pretraining. This approach significantly enhances the model’s ability to internalize structure, yielding substantial transfer improvements on C4, CodeParrot, and DeepMind-Math. Notably, context-recall accuracy increases from 10% to 98%, and the models reach the same loss with only 55%, 67%, and 86% of the original data on the three datasets, respectively. Further analysis reveals that attention layers predominantly govern structured reasoning, while MLP layers specialize in language modeling, offering a novel paradigm for efficient pretraining.
📝 Abstract
Pretraining directly on web-scale corpora is the de facto paradigm for building language models. We study an alternative setting where the model is initially exposed to abstract structured data, as a means to ease the subsequent acquisition of rich semantic knowledge, much like humans learn simple logic and mathematics before higher reasoning. We specifically focus on procedural data, generated by formal languages and other simple algorithms, as such abstract data. We first diagnose the algorithmic skills that different forms of procedural data can improve, often significantly. For example, on context recall (Needle-in-a-haystack), accuracy jumps from 10% to 98% when pretraining on Dyck sequences (balanced brackets). Second, we study how these gains are reflected in pretraining larger models (up to 1.3B parameters). We find that front-loading as little as 0.1% procedural data significantly outperforms standard pretraining on natural language, code, and informal mathematics (the C4, CodeParrot, and DeepMind-Math datasets). Notably, this procedural pretraining enables the models to reach the same loss value with only 55%, 67%, and 86% of the original data, respectively. Third, we explore the mechanisms behind these gains and find that procedural pretraining instils non-trivial structure in both attention and MLP layers. The former is particularly important for structured domains (e.g. code), and the latter for language. Finally, we lay a path toward combining multiple forms of procedural data. Our results show that procedural pretraining is a simple, lightweight means of improving performance and accelerating language model pretraining, ultimately suggesting the promise of disentangling knowledge acquisition from reasoning in LLMs.
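To make the notion of procedural data concrete, the following is a minimal sketch (not the authors' generator) of how Dyck sequences—strings of balanced brackets, the example highlighted in the abstract—can be sampled. The function name, bracket vocabulary, and the 0.5 open-vs-close probability are all illustrative assumptions.

```python
import random

def dyck_sequence(n_pairs, vocab="()[]{}", seed=None):
    """Sample a random balanced-bracket (Dyck) sequence containing n_pairs pairs.

    Hypothetical generator for illustration: at each step we either open a new
    bracket (if any remain) or close the most recently opened one, so the final
    string is always a valid Dyck word.
    """
    rng = random.Random(seed)
    # Split the vocabulary into (open, close) pairs: "()", "[]", "{}".
    pairs = [vocab[i:i + 2] for i in range(0, len(vocab), 2)]
    stack, out = [], []
    opens_left = n_pairs
    while opens_left > 0 or stack:
        # Open a new bracket when pairs remain and the stack is empty,
        # or with probability 0.5 otherwise; else close the top of the stack.
        if opens_left > 0 and (not stack or rng.random() < 0.5):
            open_ch, close_ch = rng.choice(pairs)
            out.append(open_ch)
            stack.append(close_ch)
            opens_left -= 1
        else:
            out.append(stack.pop())
    return "".join(out)
```

Sequences like these carry hierarchical, long-range structure but no semantic content, which is what makes them a clean probe for separating structural skills from knowledge.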