The Stability of Online Algorithms in Performative Prediction

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of predictive models in performative prediction settings, where deployed algorithms alter the data distributions they are trained on and induce feedback loops. The authors propose an unconditional reduction that avoids strong assumptions about how models influence data generation, leveraging randomization and a martingale argument to sidestep known hardness results for finding stable models. By establishing a theoretical bridge between performative prediction and online learning, they prove that any no-regret algorithm converges to a mixed performatively stable equilibrium: a solution in which the induced data distribution makes the model's own predictions optimal in hindsight. This result also explains why common algorithms such as gradient descent are naturally stabilizing and mitigate runaway feedback effects.
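For reference, here is one standard formalization of stability, following Perdomo et al. (2020); the mixed variant is a paraphrase of the equilibrium described above, not the paper's exact statement.

```latex
% Performative risk of a model \theta under distribution map \mathcal{D}(\cdot):
\[
  \mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\bigl[\ell(\theta; z)\bigr].
\]
% A model is performatively stable if it is optimal on the distribution
% it itself induces, i.e., a fixed point of repeated retraining:
\[
  \theta_{\mathrm{PS}} \;\in\; \arg\min_{\theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\bigl[\ell(\theta; z)\bigr].
\]
% A mixed performatively stable equilibrium relaxes this to a distribution
% \mu over models: every model in the support of \mu best-responds to the
% data distribution induced by deploying \mu itself.
\[
  \mathrm{supp}(\mu) \;\subseteq\; \arg\min_{\theta} \; \mathbb{E}_{z \sim \mathcal{D}(\mu)}\bigl[\ell(\theta; z)\bigr].
\]
```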

📝 Abstract
The use of algorithmic predictions in decision-making leads to a feedback loop where the models we deploy actively influence the data distributions we see and later retrain on. This dynamic was formalized by Perdomo et al. (2020) in their work on performative prediction. Our main result is an unconditional reduction showing that any no-regret algorithm deployed in performative settings converges to a (mixed) performatively stable equilibrium: a solution in which models actively shape data distributions in such a way that their own predictions look optimal in hindsight. Prior to our work, all positive results in this area imposed strong restrictions on how models influence distributions. By using a martingale argument and allowing randomization, we avoid any such assumption and sidestep recent hardness results for finding stable models. Lastly, on a more conceptual note, our connection sheds light on why common algorithms, like gradient descent, are naturally stabilizing and prevent runaway feedback loops. We hope our work enables future technical transfer of ideas between online optimization and performativity.
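To make the reduction concrete, here is a minimal self-contained sketch under invented assumptions (a three-model class and the toy linear distribution map `performative_losses`, neither of which is from the paper): a standard no-regret algorithm, Hedge, is deployed round by round while the losses react to what is being deployed.

```python
import numpy as np

# Toy illustration of the reduction: run a no-regret algorithm
# (Hedge / multiplicative weights) over a finite model class, where each
# round's losses depend on the mixture currently being deployed. The
# distribution map `performative_losses` is an invented toy, not the
# paper's construction; the point is only that time-averaged play settles.

K = 3        # number of candidate models
eta = 0.1    # Hedge learning rate
T = 5000     # rounds of deploy-and-retrain

def performative_losses(mix):
    """Expected loss of each model when the mixture `mix` is deployed.

    Toy map: each model pays a fixed base cost plus a penalty that grows
    the more the deployed mixture concentrates on it, mimicking a
    population that strategically responds to the dominant model.
    """
    base = np.array([0.3, 0.5, 0.4])
    return base + 0.8 * mix  # heavier deployment -> worse induced data

weights = np.ones(K)
avg_play = np.zeros(K)

for t in range(1, T + 1):
    mix = weights / weights.sum()        # mixed strategy deployed this round
    losses = performative_losses(mix)    # distribution reacts to the mixture
    weights *= np.exp(-eta * losses)     # multiplicative-weights update
    weights /= weights.sum()             # renormalize to avoid underflow
    avg_play += (mix - avg_play) / t     # running time-average of play

print("time-averaged mixture:", np.round(avg_play, 3))
print("losses it induces:    ", np.round(performative_losses(avg_play), 3))
```

Because Hedge's regret is sublinear in T, the time-averaged mixture approaches a point where no fixed model outperforms it against the distribution that the mixture itself induces, which is precisely the mixed stability notion sketched above.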
Problem

Research questions and friction points this paper is trying to address.

performative prediction
online algorithms
stability
feedback loop
data distribution shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

performative prediction
no-regret algorithms
martingale argument
performatively stable equilibrium
online learning