🤖 AI Summary
This paper investigates the convergence and stability of self-consuming generative models under heterogeneous human preferences. To address the dynamics induced by multi-round retraining on mixtures of real and generated data, we develop a novel analytical framework that integrates nonlinear Perron–Frobenius theory with dynamical-systems methods. Going beyond the classical Banach contraction mapping paradigm, we establish rigorous convergence guarantees in settings where standard contraction arguments do not apply. We systematically characterize the asymptotic behavior across four distinct preference regimes and derive universal convergence conditions and stability criteria. Our results reveal how preference heterogeneity fundamentally governs whether self-consuming learning dynamics are stable or unstable. Moreover, they substantially broaden the theoretical applicability of self-consuming models, providing foundations for the controllable and robust deployment of generative AI systems.
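To fix ideas, a minimal sketch of the kind of retraining update studied in this line of work can be written as below. The notation here is illustrative, not the paper's: $\mathcal{T}$ stands for an idealized retraining (fitting) operator, $\mathcal{C}$ for a preference-based curation operator, and $\lambda$ for the fraction of real data in the training mixture.

```latex
% Hedged sketch of a generic self-consuming update (notation is ours, not the paper's):
% p_t     : model distribution after round t
% p_data  : real-data distribution
% C       : preference-based curation applied to synthetic samples
% T       : idealized retraining (fitting) operator
% lambda  : fraction of real data in the training mixture
\[
  p_{t+1} \;=\; \mathcal{T}\!\bigl(\lambda\, p_{\mathrm{data}} + (1-\lambda)\,\mathcal{C}(p_t)\bigr),
  \qquad \lambda \in [0,1].
\]
```

Convergence then amounts to the existence and attractiveness of a fixed point $p^*$ of this update. Roughly speaking, nonlinear Perron–Frobenius theory provides fixed-point and convergence results for order-preserving maps on cones, which is how such dynamics can be handled even when the update is not a contraction in a norm-induced metric.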
📝 Abstract
Self-consuming generative models have received significant attention over the last few years. In this paper, we study a self-consuming generative model with heterogeneous preferences that generalizes the model of Ferbach et al. (2024). The model is retrained round by round on a mixture of real data and its previous-round synthetic outputs. The asymptotic behavior of the retraining dynamics is investigated across four regimes using different techniques, including nonlinear Perron–Frobenius theory. Our analyses improve upon those of Ferbach et al. (2024) and provide convergence results in settings where the well-known Banach contraction mapping arguments do not apply. Stability and instability results for the retraining dynamics are also given.
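The retraining loop described in the abstract can be simulated in a toy setting. The sketch below is purely illustrative and not the paper's model: it assumes a 1-D Gaussian generator, a single quadratic preference reward, best-of-2 curation, and a fixed real-data fraction `lam`, all of which are hypothetical choices.

```python
# Toy simulation of a self-consuming retraining loop (illustrative sketch only;
# the paper's model, preference structure, and operators are more general).
# Assumptions (not from the paper): 1-D Gaussian model, best-of-2 curation
# under one quadratic reward, fixed real-data fraction `lam`.
import numpy as np

rng = np.random.default_rng(0)

def reward(x):
    # Hypothetical preference reward: peaks at x = 1.
    return -(x - 1.0) ** 2

def curate(a, b):
    # Best-of-2 curation: keep whichever sample the reward prefers.
    return np.where(reward(a) >= reward(b), a, b)

def retrain_round(mu, sigma, real_data, lam, n=10_000):
    # Generate two synthetic batches from the current model and curate pairwise.
    synthetic = curate(rng.normal(mu, sigma, n), rng.normal(mu, sigma, n))
    # Mix real and curated synthetic data, then refit the Gaussian by MLE.
    n_real = int(lam * n)
    mix = np.concatenate([rng.choice(real_data, n_real), synthetic[: n - n_real]])
    return mix.mean(), mix.std()

real_data = rng.normal(0.0, 1.0, 50_000)  # "real" distribution: N(0, 1)
mu, sigma = 0.0, 1.0
for t in range(30):
    mu, sigma = retrain_round(mu, sigma, real_data, lam=0.3)
print(f"after 30 rounds: mu={mu:.3f}, sigma={sigma:.3f}")
```

Varying `lam` in this toy loop illustrates the trade-off the paper studies: a positive real-data fraction anchors the dynamics near the data distribution, while `lam = 0` leaves curation free to pull the model away from it, which is the kind of stability-versus-instability question the four preference regimes address.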