Theoretical Proof that Generated Text in the Corpus Leads to the Collapse of Auto-regressive Language Models

📅 2024-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies the inherent mechanism by which autoregressive language models (LMs) inevitably collapse when iteratively trained on corpora containing generated text. Specifically, even an arbitrarily small positive proportion of model-generated content in the initial training corpus leads to irreversible performance degradation—eventually converging to the level of a randomly initialized model after multiple training generations. Method: The authors establish the first rigorous mathematical proof of this phenomenon, overcoming prior reliance on empirical observation alone. They develop a theoretical framework grounded in probabilistic modeling, information-theoretic entropy analysis, and recursive distribution shift theory, and validate it via controlled synthetic-data experiments. Results: Post-collapse models exhibit no statistically significant performance difference from untrained baselines across standard benchmarks, thereby establishing a fundamental theoretical lower bound on capability degradation induced by generative data contamination.

📝 Abstract
Auto-regressive language models (LMs) are widely used to generate text on the World Wide Web, and this generated text is often collected into the training corpora of subsequent generations of LMs. Previous work found experimentally that LMs collapse when trained on recursively generated text. This paper presents a theoretical proof that once a corpus (such as the World Wide Web) begins to incorporate generated text, and the training text of each LM is sampled from this corpus, then no matter how small a proportion of each LM's generated text enters the corpus, LM collapse is bound to occur after a sufficient number of generations. Our proof is validated by a series of experiments showing that collapsed LMs perform no better than an untrained LM with randomly initialized parameters. By proving the existence of LM collapse, we express our concern about the current situation, in which an increasing amount of generated text may be used in LM training. The source code is available online: https://github.com/wanglc02/generated-data
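The collapse dynamic described in the abstract can be illustrated with a toy simulation (an illustrative sketch of recursive resampling, not the paper's actual construction or proof): treat a "language" as a categorical distribution over token types, and let each training generation be a maximum-likelihood re-estimation from a finite sample of the previous generation's output. Sampling noise compounds across generations, and since a token type whose probability hits zero can never reappear, the process is eventually absorbed at a degenerate, zero-entropy distribution.

```python
import math
import random
from collections import Counter

def entropy(p):
    """Shannon entropy (bits) of a categorical distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def next_generation(p, n, rng):
    """Draw n tokens from p, then re-estimate p by maximum likelihood.
    This mimics one round of 'train on the previous model's output'."""
    counts = Counter(rng.choices(range(len(p)), weights=p, k=n))
    return [counts[i] / n for i in range(len(p))]

rng = random.Random(0)
p = [1 / 8] * 8                     # uniform "true" language over 8 token types
for _ in range(2000):               # generations of recursive training
    p = next_generation(p, n=50, rng=rng)

# Diversity decays by roughly a factor (1 - 1/n) per generation, so after
# many generations all probability mass sits on a single token type.
print(f"final entropy: {entropy(p):.3f} bits, "
      f"support size: {sum(x > 0 for x in p)}")
```

The fixed sample size `n=50` and vocabulary size 8 are arbitrary choices for the sketch; larger values only slow, but do not prevent, absorption at the degenerate state.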
Problem

Research questions and friction points this paper is trying to address.

Auto-regressive language models collapse
Recursively generated text causes collapse
Lack of a theoretical proof of LM collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical proof of LM collapse
Generated text in corpus
Validation through experiments
Lecheng Wang
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education; School of Computer Science, Peking University, Beijing, China
Xianjie Shi
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education; School of Computer Science, Peking University, Beijing, China
Ge Li
Full Professor of Computer Science, Peking University
Program Analysis, Program Generation, Deep Learning
Jia Li
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education; School of Computer Science, Peking University, Beijing, China
Yihong Dong
Peking University
Code Generation, Large Language Models
Xuanming Zhang
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education; School of Computer Science, Peking University, Beijing, China
Wenpin Jiao
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education; School of Computer Science, Peking University, Beijing, China
Hong Mei
Peking University
Software Engineering, System Software, Data Analytics