Lost in Retraining: Roaming the Parameter Space of Exponential Families Under Closed-Loop Learning

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the parameter dynamics of exponential-family models under closed-loop learning, i.e., iterative retraining on data the model generates itself. Under maximum likelihood estimation, the sufficient statistics form a martingale, so the parameters converge to absorbing states that entrench and amplify biases present in the initial data; this drift is not reparameterization invariant, exposing an intrinsic instability of closed-loop learning. The paper derives equations of motion for the parameters and analyzes three mitigation strategies: mixing in an infinitesimal fraction of data generated from a fixed model, adopting maximum a posteriori (MAP) estimation, or introducing regularization. The analysis shows that even negligible external data substantially curbs bias accumulation, and that the asymptotic behavior depends fundamentally on the choice of parameterization, yielding a rigorous dynamical characterization of self-reinforcing bias in generative models together with a principled, actionable set of interventions.
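
As a concrete illustration of this mechanism, here is a minimal sketch (hypothetical code, not from the paper; the Bernoulli model, sample size, and function names are assumptions) of closed-loop maximum-likelihood retraining for a Bernoulli model, the simplest exponential family. Each generation refits the parameter to a finite sample drawn from the current model, so the fitted p performs a martingale random walk that is eventually absorbed at 0 or 1, locking in whatever bias the initial value carried.

```python
import numpy as np

rng = np.random.default_rng(0)

def closed_loop_mle(p0, n_samples=100, max_generations=5000):
    """Repeatedly refit a Bernoulli(p) model by MLE on its own samples."""
    p = p0
    trajectory = [p]
    for _ in range(max_generations):
        data = rng.random(n_samples) < p   # sample n points from the current model
        p = data.mean()                    # MLE = empirical mean (the sufficient statistic)
        trajectory.append(p)
        if p == 0.0 or p == 1.0:           # absorbing states of the dynamics
            break
    return np.array(trajectory)

traj = closed_loop_mle(p0=0.6)
print(f"started at 0.6, ended at {traj[-1]:.3f} after {len(traj) - 1} generations")
```

With n_samples = 100 the walk typically hits an absorbing state within a few hundred generations; a larger sample size slows the absorption down but does not prevent it.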

📝 Abstract
Closed-loop learning is the process of repeatedly estimating a model from data generated from the model itself. It is receiving great attention due to the possibility that large neural network models may, in the future, be primarily trained with data generated by artificial neural networks themselves. We study this process for models that belong to exponential families, deriving equations of motion that govern the dynamics of the parameters. We show that maximum likelihood estimation of the parameters endows sufficient statistics with the martingale property and that as a result the process converges to absorbing states that amplify initial biases present in the data. However, we show that this outcome may be prevented by polluting the data with an infinitesimal fraction of data points generated from a fixed model, by relying on maximum a posteriori estimation or by introducing regularisation. Furthermore, we show that the asymptotic behavior of the dynamics is not reparametrisation invariant.
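
To make the martingale property and its parameterization dependence concrete, the following hypothetical numerical check (not from the paper; the Bernoulli setting and sample sizes are assumptions) estimates the one-step conditional expectation of the refit parameter. The mean parameter p, i.e., the expectation of the sufficient statistic, is preserved on average, whereas its log-odds (the natural parameter) acquires a systematic drift, one face of the reparametrisation non-invariance stated above.

```python
import numpy as np

rng = np.random.default_rng(1)

p, n_samples, n_trials = 0.6, 100, 200_000

# Repeat a single closed-loop retraining step many times from the same parameter p.
counts = rng.binomial(n_samples, p, size=n_trials)
p_next = counts / n_samples                      # MLE after one step on self-generated data

def logit(q):
    return np.log(q / (1.0 - q))                 # natural parameter of the Bernoulli family

print(f"E[p']        = {p_next.mean():.4f}   vs  p        = {p:.4f}")                # equal: martingale
print(f"E[logit(p')] = {logit(p_next).mean():.4f}   vs  logit(p) = {logit(p):.4f}")  # not equal: drift
```

At p = 0.6 the log-odds drifts upwards, i.e., towards the nearer absorbing state at p = 1, even though the mean parameter has no drift at all; which coordinates one tracks therefore changes the apparent dynamics.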
Problem

Research questions and friction points this paper is trying to address.

Studying closed-loop learning dynamics in exponential-family models
Analyzing convergence to biased absorbing states in retraining
Exploring methods to prevent bias amplification in closed-loop learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Closed-loop learning dynamics for exponential-family models
Prevent bias via data pollution or regularization
Maximum a posteriori estimation stabilizes convergence (illustrated in the sketch below)
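
As a rough comparison of these mitigations, the sketch below (a toy Bernoulli setting with assumed parameter values, not the authors' experiments) runs the same closed-loop retraining as plain MLE, as MLE with a small fraction of data drawn from a fixed model, and as MAP estimation under a Beta prior, which here doubles as a simple regularizer.

```python
import numpy as np

rng = np.random.default_rng(2)

def closed_loop(p0, real_frac=0.0, p_fixed=0.6, prior_a=1.0,
                n_samples=100, generations=2000):
    """Closed-loop retraining of Bernoulli(p) with optional mitigations."""
    p = p0
    for _ in range(generations):
        n_real = int(round(real_frac * n_samples))
        synth = rng.random(n_samples - n_real) < p    # self-generated data
        fixed = rng.random(n_real) < p_fixed          # data from a fixed model
        ones = synth.sum() + fixed.sum()
        # MAP estimate under a Beta(a, a) prior: (ones + a - 1) / (n + 2a - 2).
        # a = 1 (flat prior) reduces to plain MLE; a > 1 keeps p away from 0 and 1.
        p = (ones + prior_a - 1.0) / (n_samples + 2.0 * prior_a - 2.0)
    return p

print("plain MLE:          ", closed_loop(0.6))                  # absorbed at 0 or 1
print("2% fixed-model data:", closed_loop(0.6, real_frac=0.02))  # fluctuates, never absorbed
print("MAP, Beta(2, 2):    ", closed_loop(0.6, prior_a=2.0))     # bounded away from 0 and 1
```

Under these assumptions the last two variants remove the absorbing states at 0 and 1, so the initial bias can no longer be locked in, consistent with the paper's claim that even an infinitesimal fraction of fixed data or mild regularization changes the asymptotics.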