🤖 AI Summary
This paper identifies the mechanism by which autoregressive language models (LMs) inevitably collapse when iteratively trained on corpora containing generated text. Specifically, as long as each generation of LMs contributes even an arbitrarily small positive amount of generated text to the shared corpus, performance degrades irreversibly, eventually converging to the level of a randomly initialized model after sufficiently many training generations.
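As a rough formalization of this feedback loop (the notation below is assumed for illustration and is not taken from the paper), the corpus accumulates generated text and each new model is fitted to a sample drawn from it:

```latex
% Assumed notation, for illustration only (not the paper's formalism):
%   C_t          : the shared corpus at generation t
%   p_{\theta_t} : the LM trained at generation t
%   G_t          : text generated by that LM and published back into the corpus
%   S_{t+1}      : the training text sampled from the updated corpus
\[
  \mathcal{C}_{t+1} \;=\; \mathcal{C}_t \cup G_t,
  \qquad G_t \sim p_{\theta_t},
  \qquad
  \theta_{t+1} \;=\; \arg\max_{\theta} \sum_{x \in S_{t+1}} \log p_{\theta}(x),
  \qquad S_{t+1} \subseteq \mathcal{C}_{t+1}.
\]
```

The paper's claim, in these terms, is that collapse occurs even when each $G_t$ makes up an arbitrarily small positive fraction of the corpus.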
Method: The authors establish a rigorous mathematical proof of this phenomenon, which prior work had supported only through empirical observation. Their theoretical framework combines probabilistic modeling of the generate-and-retrain loop, information-theoretic entropy analysis, and an analysis of recursive distribution shift, and is validated with controlled synthetic-data experiments.
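To build intuition for why the recursion degrades, here is a minimal toy simulation, not the authors' controlled experiments: the "model" is simply a categorical distribution over a vocabulary of `K` tokens, refitted by maximum likelihood to `N` samples generated by the previous model (the extreme case in which the new training data is entirely generated); `K`, `N`, and the number of generations are arbitrary illustrative values.

```python
# Toy sketch of recursive training on generated data (not the paper's setup):
# each generation's "model" is a categorical distribution over K tokens,
# refitted by maximum likelihood to N samples drawn from the previous model.
import numpy as np

rng = np.random.default_rng(0)
K, N, GENERATIONS = 50, 200, 500          # vocabulary size, samples per generation, iterations

p = np.full(K, 1.0 / K)                   # generation-0 "model": uniform over the vocabulary
for gen in range(GENERATIONS):
    counts = rng.multinomial(N, p)        # generate a synthetic corpus from the current model
    p = counts / N                        # "retrain": MLE fit of the next model to that corpus
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    if gen % 100 == 0 or gen == GENERATIONS - 1:
        print(f"generation {gen:3d}: support={np.count_nonzero(p):2d} tokens, entropy={entropy:.3f} nats")
```

Token types that happen never to be sampled in some generation drop out of the fitted distribution and can never return, so the support shrinks and the entropy drifts toward zero over generations, a small-scale analogue of the collapse the paper analyzes for full LMs.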
Results: Collapsed models show no statistically significant performance advantage over untrained baselines with randomly initialized parameters in the reported experiments, indicating that the degradation induced by generated-data contamination ultimately erases the model's learned capability.
📝 Abstract
Auto-regressive language models (LMs) have been widely used to generate text on the World Wide Web, and this generated text is often collected into the training corpora of subsequent generations of LMs. Previous work found experimentally that LMs collapse when trained on recursively generated text. This paper presents a theoretical proof that once a corpus (such as the World Wide Web) begins to incorporate generated text, and the training text of each LM is sampled from this corpus, then no matter how small the amount of text each LM contributes to the corpus, LM collapse is bound to occur after a sufficient amount of time. Our proof is validated by a series of experiments showing that the collapsed LMs perform no better than an untrained LM with randomly initialized parameters. By proving the existence of LM collapse, we express our concerns about the current situation, in which an increasing amount of generated text may be used in LM training. The source code is available in the online repository: https://github.com/wanglc02/generated-data