BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining

πŸ“… 2025-08-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the performance saturation that data bottlenecks cause in large language model (LLM) pretraining, this work proposes BeyondWeb, a synthetic data generation framework that systematically investigates how generator model scale, model family, and data rewriting strategies jointly affect synthetic data quality. It establishes a high-quality generation paradigm that integrates model feedback, rewrite filtering, and diversity control. Evaluated at trillion-token pretraining scale, BeyondWeb significantly improves the semantic richness and training efficiency of synthetic data. Averaged across 14 benchmarks, it outperforms Cosmopedia by up to 5.1 percentage points and Nemotron-Synth by up to 2.6; it trains up to 7.7x faster than open web data; and it enables a 3B model trained on 180B tokens to outperform an 8B model trained on the same token budget of Cosmopedia. By realizing a multi-factor, jointly optimized synthetic data generation system, the work offers a reproducible and scalable path past pretraining data constraints.

πŸ“ Abstract
Recent advances in large language model (LLM) pretraining have shown that simply scaling data quantity eventually leads to diminishing returns, hitting a data wall. In response, the use of synthetic data for pretraining has emerged as a promising paradigm for pushing the frontier of performance. Despite this, the factors affecting synthetic data quality remain poorly understood. In this work, we introduce BeyondWeb, a synthetic data generation framework that produces high-quality synthetic data for pretraining. BeyondWeb significantly extends the capabilities of traditional web-scale datasets, outperforming state-of-the-art synthetic pretraining datasets such as Cosmopedia and Nemotron-CC's high-quality synthetic subset (Nemotron-Synth) by up to 5.1 percentage points (pp) and 2.6pp, respectively, when averaged across a suite of 14 benchmark evaluations. It delivers up to 7.7x faster training than open web data and 2.7x faster than Nemotron-Synth. Remarkably, a 3B model trained for 180B tokens on BeyondWeb outperforms an 8B model trained for the same token budget on Cosmopedia. We also present several insights from BeyondWeb on synthetic data for pretraining: what drives its benefits, which data to rephrase and how, and the impact of model size and family on data quality. Overall, our work shows that there's no silver bullet for generating high-quality synthetic pretraining data. The best outcomes require jointly optimizing many factors, a challenging task that requires rigorous science and practical expertise. Naive approaches can yield modest improvements, potentially at great cost, while well-executed methods can yield transformative improvements, as exemplified by BeyondWeb.
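The abstract describes a rephrase-then-filter pipeline with diversity control, but no code accompanies this page. The following is a minimal pure-Python sketch of that shape under stated assumptions: the rephrase() stub stands in for an LLM rewriter, quality_score() is a toy lexical heuristic in place of model-based rewrite filtering, and n-gram hashing approximates diversity control. None of these names or thresholds come from the paper.

```python
# A minimal sketch, NOT the authors' released pipeline: it illustrates the
# rephrase -> filter -> diversify loop described in the abstract. The
# rephrase() stub, quality_score() heuristic, and all thresholds below are
# hypothetical placeholders for model-based components.
import hashlib
from typing import Iterable, List, Set

def rephrase(document: str, style: str) -> str:
    """Hypothetical stand-in for an LLM rewriter; a real pipeline would
    prompt a generator model to rewrite the document in a target style."""
    return f"[{style}] {document}"  # placeholder transformation

def quality_score(text: str) -> float:
    """Toy proxy for rewrite filtering: type/token ratio as a crude
    quality signal. Real systems would use model-based scoring."""
    tokens = text.split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def ngram_fingerprint(text: str, n: int = 8) -> Set[str]:
    """Hash word n-grams so near-duplicate rewrites can be detected."""
    tokens = text.lower().split()
    grams = (" ".join(tokens[i:i + n])
             for i in range(max(len(tokens) - n + 1, 1)))
    return {hashlib.md5(g.encode()).hexdigest() for g in grams}

def generate_synthetic(corpus: Iterable[str], styles: List[str],
                       min_quality: float = 0.5,
                       max_overlap: float = 0.5) -> List[str]:
    """Rephrase each source document in several styles, keeping only
    rewrites that pass the quality filter and add n-gram diversity."""
    kept: List[str] = []
    seen: Set[str] = set()
    for doc in corpus:
        for style in styles:
            candidate = rephrase(doc, style)
            if quality_score(candidate) < min_quality:
                continue  # rewrite filtering
            fp = ngram_fingerprint(candidate)
            if len(fp & seen) / len(fp) > max_overlap:
                continue  # diversity control: skip near-duplicates
            seen |= fp
            kept.append(candidate)
    return kept

if __name__ == "__main__":
    web_docs = ["The quick brown fox jumps over the lazy dog again and again."]
    print(generate_synthetic(web_docs, styles=["qa", "textbook"]))
```

In this toy run the second rewrite is dropped because its n-grams overlap the first beyond max_overlap, which is the kind of redundancy a diversity control stage is meant to catch.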
Problem

Research questions and friction points this paper addresses.

Addressing diminishing returns from scaling data quantity in LLM pretraining
Understanding factors affecting synthetic data quality for pretraining
Optimizing synthetic data generation to outperform web-scale datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

BeyondWeb, a synthetic data generation framework for trillion-token pretraining
Outperforms state-of-the-art synthetic datasets (Cosmopedia by up to 5.1pp, Nemotron-Synth by up to 2.6pp) averaged across 14 benchmarks
Shows that quality hinges on jointly optimizing many factors: which data to rephrase, how to rephrase it, and the size and family of the generator model (a toy sweep of such factors is sketched below)
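The joint-optimization claim is easiest to see as a sweep over interacting factors rather than tuning one knob at a time. Below is a minimal sketch, assuming hypothetical factor names (generator_family, rephrase_style, source_bucket) and a stub score_on_benchmarks(); the paper's actual factors and evaluation protocol are in the full text, not reproduced here.

```python
# A toy sketch of the "no silver bullet" point: jointly sweeping generation
# factors instead of tuning one at a time. The factor names, candidate
# values, and score_on_benchmarks() stub are hypothetical, not the paper's
# actual search space or evaluation protocol.
from itertools import product

def score_on_benchmarks(config: dict) -> float:
    """Stub: in practice this means generating data under `config`,
    pretraining a proxy model on it, and averaging benchmark accuracy."""
    return (hash(frozenset(config.items())) % 1000) / 1000.0  # placeholder

factors = {
    "generator_family": ["family_a", "family_b"],
    "generator_size": ["1b", "3b", "8b"],
    "rephrase_style": ["qa", "textbook", "summary"],
    "source_bucket": ["high_quality_web", "mid_quality_web"],
}

best_config, best_score = None, float("-inf")
for values in product(*factors.values()):
    config = dict(zip(factors.keys(), values))
    score = score_on_benchmarks(config)
    if score > best_score:
        best_config, best_score = config, score

print(f"best: {best_config} -> {best_score:.3f}")
```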
πŸ‘₯ Authors
Pratyush Maini (Carnegie Mellon University; Trustworthy ML)
Vineeth Dorna (DatologyAI)
Parth Doshi (MS in CSE, University of California San Diego; Machine Learning, Computer Vision)
Aldo Carranza (DatologyAI)
Fan Pan (DatologyAI)
Jack Urbanek (DatologyAI; Artificial Intelligence)
Paul Burstein (DatologyAI)
Alex Fang (DatologyAI)
Alvin Deng (DatologyAI)
Amro Abbas (DatologyAI; Machine Learning, Natural Language Processing, Computer Vision)
Brett Larsen (DatologyAI)
Cody Blakeney (DatologyAI)
Charvi Bannur (DatologyAI)
Christina Baek (PhD, Carnegie Mellon University; Machine Learning)
Darren Teh (DatologyAI)
David Schwab (DatologyAI)
Haakon Mongstad (DatologyAI)
Haoli Yin (DatologyAI)
Josh Wills (DatologyAI)
Kaleigh Mentzer (DatologyAI)
Luke Merrick (DatologyAI)
Ricardo Monti (DatologyAI)
Rishabh Adiga (DatologyAI)
Siddharth Joshi (DatologyAI)
Spandan Das (DatologyAI)