🤖 AI Summary
This work addresses the risk of data contamination and subsequent model collapse in the recursive training of generative AI, where synthetic data generated by earlier model iterations is inadvertently mixed with real data. The authors propose a general analytical framework that requires no strong assumptions about the true data distribution, offering the first convergence guarantees for recursive training under fully nonparametric conditions; the framework is further extended to settings with sampling bias. By modeling the generative process through universal function approximators and combining nonparametric theory with minimax convergence rate analysis, they establish that the overall convergence rate is governed by the slower of two factors: the baseline model's convergence rate and the proportion of real data incorporated at each training round. These theoretical findings are validated empirically.
📄 Abstract
Generative Artificial Intelligence (AI), such as large language models (LLMs), has become a transformative force across science, industry, and society. As these systems grow in popularity, web data becomes increasingly interwoven with AI-generated material, which is increasingly difficult to separate from human-generated content. Because generative models are updated regularly, later models will inevitably be trained on mixtures of human-generated data and AI-generated data from earlier versions, creating a recursive training process with data contamination. Existing theoretical work has examined only highly simplified settings in which both the real data and the generative model are discrete or Gaussian, and in these settings recursive training has been shown to lead to model collapse. However, real data distributions are far more complex, and modern generative models are far more flexible than Gaussian and linear mechanisms. To fill this gap, we study recursive training in a general framework with minimal assumptions on the real data distribution, allowing the underlying generative model to be a general universal approximator. In this framework, we show that contaminated recursive training still converges, with a convergence rate equal to the minimum of the baseline model's convergence rate and the fraction of real data used in each iteration. To the best of our knowledge, this is the first (positive) theoretical result on recursive training without distributional assumptions on the data. We further extend the analysis to settings where sampling bias is present in data collection, and we support all theoretical results with empirical studies.
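The recursive-training setup described above can be illustrated with a minimal toy simulation, not taken from the paper: a Gaussian model is repeatedly refit and resampled, and a fraction `alpha` of each round's training set is fresh real data. The parameter names, sample sizes, and generation counts are illustrative assumptions; the example is only a sketch of the contaminated-recursion mechanism, not the paper's actual experimental design.

```python
import numpy as np

rng = np.random.default_rng(0)

def recursive_training(alpha, n=50, generations=2000):
    """Recursively fit a Gaussian and resample from it each generation.

    A fraction `alpha` of each round's training set is fresh real data
    drawn from the true distribution N(0, 1); the rest is synthetic data
    sampled from the previously fitted model.
    """
    mu, sigma = 0.0, 1.0  # current fitted model parameters
    for _ in range(generations):
        n_real = int(alpha * n)
        real = rng.standard_normal(n_real)              # fresh real data
        synthetic = rng.normal(mu, sigma, n - n_real)   # model-generated data
        data = np.concatenate([real, synthetic])
        mu, sigma = data.mean(), data.std(ddof=1)       # refit the model
    return sigma

sigma_pure = recursive_training(alpha=0.0)   # fully recursive training
sigma_mixed = recursive_training(alpha=0.5)  # half real data each round
```

In the fully recursive case the fitted standard deviation drifts toward zero over the generations (the collapse phenomenon shown in prior Gaussian analyses), whereas retaining a constant fraction of real data anchors the fitted model near the true scale, in line with the abstract's claim that the real-data fraction governs the limiting behavior.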