🤖 AI Summary
Recursive training of large language models (LLMs) suffers from model collapse—a progressive degradation in performance—caused by the inadvertent inclusion of synthetic data in training corpora. Method: This paper proposes an importance-reweighting-based data resampling approach guided by a machine-generated text detector, enabling dynamic purification of the pretraining distribution without prior knowledge of data provenance. Crucially, the method couples detection signals with importance weighting to adaptively upweight human-authored texts and implicitly downweight synthetic ones, thereby interrupting the collapse cycle at its source. Contribution/Results: Experiments on GPT-2 and SmolLM2 under recursive training demonstrate significant mitigation of performance degradation. When the training corpus contains sufficient human-written text, the method yields consistent improvements on open-ended generation tasks, validating its effectiveness and practical utility for robust LLM self-improvement.
📝 Abstract
As Large Language Models (LLMs) become increasingly prevalent, their generated outputs are proliferating across the web, risking a future where machine-generated content dilutes human-authored text. Since web data is the primary resource for LLM pretraining, future models will be trained on an unknown proportion of synthetic data. This leads to model collapse, a degenerative process in which models reinforce their own errors and suffer a drop in performance. In this study, we investigate the impact of the decoding strategy on model collapse, analysing the characteristics of the data generated during recursive training, its similarity to human references, and the resulting model performance. Using the decoding strategies that lead to the most severe model degradation, we tackle the question: how can model collapse be avoided when the origin (human or synthetic) of the training data is unknown? We design a novel methodology based on resampling the data distribution using importance weights derived from our machine-generated text detector. Our method is validated on two LLM variants (GPT-2 and SmolLM2) on the open-ended text generation task, demonstrating that it successfully prevents model collapse and, when the training dataset contains enough human-authored data, even improves model performance.
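The detector-guided resampling described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `detector` callable is a hypothetical stand-in for the authors' machine-generated text detector, assumed to return an estimate of P(human-authored | text), and the weighting scheme (sampling proportional to that probability) is one simple instantiation of importance reweighting.

```python
import random

def resample_corpus(texts, detector, rng=None):
    """Resample a mixed human/synthetic corpus using importance weights
    from a machine-generated text detector.

    `detector(text)` is a hypothetical callable returning an estimate of
    P(human-authored | text); documents that look synthetic get small
    weights and are implicitly downweighted in the resampled set.
    """
    rng = rng or random.Random(0)
    weights = [detector(t) for t in texts]
    # Draw a same-sized training set with replacement, so the effective
    # pretraining distribution shifts toward human-written text without
    # needing ground-truth provenance labels.
    return rng.choices(texts, weights=weights, k=len(texts))

# Toy usage: a corpus with half synthetic documents and a detector that
# is fairly confident about which is which.
corpus = ["human_doc"] * 5 + ["synthetic_doc"] * 5
toy_detector = lambda t: 0.95 if t.startswith("human") else 0.05
purified = resample_corpus(corpus, toy_detector)
```

After resampling, `purified` is dominated by human-looking documents, which is the mechanism by which the collapse cycle is interrupted at the data level.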