🤖 AI Summary
This work studies the failure of empirical risk minimization (ERM) when training data are contaminated with synthetic, LLM-generated samples that are indistinguishable from natural data. Using tools from statistical learning theory and VC dimension analysis, the authors examine this contamination setting along two fronts. For estimating the mean of an arbitrary $d$-dimensional distribution, ERM still converges to the true mean but is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. For PAC learning, ERM can fail to converge to the true concept, echoing the model collapse literature; nevertheless, the authors show that for any VC class and any contamination proportion, there exist algorithms that learn the correct hypothesis, thereby circumventing model collapse.
📝 Abstract
The prevalence and low cost of LLMs have led to a rise in synthetic content. From review sites to court documents, "natural" content has been contaminated by data points that appear similar to natural data, but are in fact LLM-generated. In this work we revisit fundamental learning theory questions in this, now ubiquitous, setting. We model this scenario as a sequence of learning tasks where the input is a mix of natural and synthetic data, and the learning algorithms are oblivious to the origin of any individual example. We study the possibilities and limitations of ERM in this setting. For the problem of estimating the mean of an arbitrary $d$-dimensional distribution, we find that while ERM converges to the true mean, it is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. For the PAC learning setting, the disparity is even more stark. We find that ERM does not always converge to the true concept, echoing the model collapse literature. However, we show there are algorithms capable of learning the correct hypothesis for arbitrary VC classes and arbitrary amounts of contamination.
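The mean-estimation claim can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm: it assumes synthetic generations are simply noisier copies of the natural distribution (with known, growing noise levels) and uses inverse-variance weights per generation, one natural choice of non-uniform weighting. The function names and the noise model are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def weighted_mean(groups):
    """Inverse-variance weighted mean over (samples, sigma) groups."""
    num = sum(sum(s) / sigma ** 2 for s, sigma in groups)
    den = sum(len(s) / sigma ** 2 for s, sigma in groups)
    return num / den

def trial(rng, true_mean=0.0, n=500, num_gens=4):
    # Toy assumption: generation g's samples get noisier as g grows,
    # a stand-in for successive synthetic generations drifting from
    # the natural data (generation 0).
    groups = [([rng.gauss(true_mean, 1.0 + g) for _ in range(n)], 1.0 + g)
              for g in range(num_gens)]
    # ERM / uniform weighting: pool everything, take the plain mean.
    pooled = [x for s, _ in groups for x in s]
    erm_err = abs(statistics.fmean(pooled) - true_mean)
    # Non-uniform weighting: downweight noisier generations.
    w_err = abs(weighted_mean(groups) - true_mean)
    return erm_err, w_err

if __name__ == "__main__":
    rng = random.Random(0)
    results = [trial(rng) for _ in range(200)]
    print("mean |ERM error|     :", statistics.fmean(e for e, _ in results))
    print("mean |weighted error|:", statistics.fmean(w for _, w in results))
```

Averaged over many trials, the weighted estimator's error is smaller than the pooled (uniform) ERM estimate, matching the qualitative claim that ERM converges but is outperformed by non-uniform weighting across generations.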