Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation

📅 2025-04-02
🤖 AI Summary
This work investigates how domain characteristics of pretraining data—such as news, opinion, and how-to texts—affect large language model (LLM) performance. We introduce linguistic register theory into LLM data curation for the first time, applying a corpus-linguistic annotation scheme to perform fine-grained register classification. Using multi-register subsets for controlled training and cross-benchmark evaluation (e.g., MMLU, BIG-bench), we conduct ablation studies and capability attribution analyses. Results show that opinion texts significantly enhance reasoning and factual consistency, whereas news texts degrade performance; models trained on high-yield registers—how-to, opinion, and informational description—outperform full-dataset baselines. Our core contributions are: (1) the first register-aware classification and attribution framework tailored for LLMs; (2) empirical evidence of heterogeneous register effects on distinct model capabilities; and (3) theoretically grounded, data-driven principles and practical guidelines for efficient pretraining.

📝 Abstract
Pretraining data curation is a cornerstone of Large Language Model (LLM) development, leading to growing research on quality filtering of large web corpora. From statistical quality flags to LLM-based labeling systems, datasets are divided into categories, frequently reducing to a binary: texts passing the filters are deemed valuable examples, while the rest are discarded as useless or detrimental. However, a more detailed understanding of the contribution of different kinds of texts to model performance is still largely lacking. In this article, we present the first study utilizing registers (also known as genres), a widely used standard in corpus linguistics for modeling linguistic variation, to curate pretraining datasets and investigate the effect of register on the performance of LLMs. We perform comparative studies by training models with register-classified data and evaluating them on standard benchmarks, and show that the register of pretraining data substantially affects model performance. We uncover surprising relationships between the pretraining material and the resulting models: using the News register results in subpar performance, while including the Opinion class, covering texts such as reviews and opinion blogs, is highly beneficial. While a model trained on the entire unfiltered dataset outperforms those trained on datasets limited to a single register, combining well-performing registers such as How-to-Instructions, Informational Description, and Opinion leads to major improvements. Furthermore, analysis of individual benchmark results reveals key differences in the strengths and drawbacks of specific register classes as pretraining data. These findings show that register is an important explainer of model variation and can facilitate more deliberate future data selection practices.
Problem

Research questions and friction points this paper is trying to address.

Analyzes how language registers affect LLM pretraining data quality
Investigates register impact on model performance using classified data
Identifies optimal register combinations for improved LLM outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes register classification for data curation
Trains models on register-classified data
Combines high-performing registers for major gains
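The curation workflow described above can be illustrated with a minimal sketch: given documents carrying register labels (e.g. from a register classifier), keep only the high-yield registers the paper identifies (How-to-Instructions, Informational Description, Opinion) and drop the rest. The data format, label strings, and function name here are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of register-based pretraining data selection.
# Assumes each document is a dict with a "register" label already
# assigned by some classifier; names are illustrative only.

HIGH_YIELD_REGISTERS = {
    "How-to-Instructions",
    "Informational Description",
    "Opinion",
}

def select_by_register(corpus, keep=HIGH_YIELD_REGISTERS):
    """Return only the documents whose register label is in `keep`."""
    return [doc for doc in corpus if doc["register"] in keep]

corpus = [
    {"text": "Mix the flour and water, then knead...", "register": "How-to-Instructions"},
    {"text": "Markets fell sharply on Tuesday...", "register": "News"},
    {"text": "This camera exceeded my expectations...", "register": "Opinion"},
]

subset = select_by_register(corpus)
# The News document is filtered out; two documents remain.
```

In practice this filtering step would run over a web-scale corpus before tokenization, and the `keep` set could be varied to reproduce the paper's single-register and combined-register ablations.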
Amanda Myntti
University of Turku
Erik Henriksson
Postdoctoral researcher, University of Turku
Veronika Laippala
University of Turku
Sampo Pyysalo
University of Turku