Optimal Regularization for Performative Learning

📅 2025-10-14
🤖 AI Summary
This work addresses the design of regularization in performative learning, where model deployment induces shifts in the data distribution, complicating classical supervised learning. We propose a prescriptive regularization strategy within a high-dimensional ridge regression framework. Theoretically, we establish a scaling law linking the optimal regularization strength to the magnitude of the performative effect. Crucially, we demonstrate that, under overparameterization, performative effects can *improve* generalization—challenging the intuition that performative feedback is always harmful. The analysis rests on tools from random matrix theory and high-dimensional statistics. Experiments on both synthetic and real-world datasets confirm that regularization set in anticipation of the performative effect reduces test risk compared to standard choices.

📝 Abstract
In performative learning, the data distribution reacts to the deployed model, for example because strategic users adapt their features to game it, which creates a more complex dynamic than in classical supervised learning. One should thus not only optimize the model for the current data but also take into account that the model might steer the distribution in a new direction, without knowing the exact nature of the potential shift. We explore how regularization can help cope with performative effects by studying its impact in high-dimensional ridge regression. We show that, while performative effects worsen the test risk in the population setting, they can be beneficial in the over-parameterized regime where the number of features exceeds the number of samples. We show that the optimal regularization scales with the overall strength of the performative effect, making it possible to set the regularization in anticipation of this effect. We illustrate this finding through empirical evaluations of the optimal regularization parameter on both synthetic and real-world datasets.
Problem

Research questions and friction points this paper is trying to address.

Addresses performative learning, where the data distribution reacts to the deployed model
Explores how regularization can mitigate performative effects in high-dimensional ridge regression
Determines how the optimal regularization should scale with the strength of the performative effect
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that regularization can be set in anticipation of performative distribution shifts
Establishes that the optimal regularization scales with the strength of the performative effect
Demonstrates that performative effects can reduce test risk in the over-parameterized regime
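The scaling claim in the bullets above can be probed numerically: grid-search the risk-minimizing ridge parameter for several performative strengths and compare. The linear response model and all constants below are illustrative assumptions, not the paper's exact setup, and the asymptotic scaling law itself is a theoretical result this small simulation cannot verify.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 50, 200, 0.1          # over-parameterized regime (d > n)
theta_star = rng.normal(size=d) / np.sqrt(d)
lams = np.logspace(-2, 2, 25)       # candidate regularization strengths

def opt_lambda(eps, n_rep=20, n_test=1000):
    """Grid-search the lambda minimizing average test risk under a
    hypothetical linear performative response of strength eps."""
    risks = np.zeros(len(lams))
    for _ in range(n_rep):
        X = rng.normal(size=(n, d))
        y = X @ theta_star + sigma * rng.normal(size=n)
        X_t = rng.normal(size=(n_test, d))
        for i, lam in enumerate(lams):
            th = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
            # Labels after deployment depend on the deployed model th.
            y_t = X_t @ (theta_star + eps * th) + sigma * rng.normal(size=n_test)
            risks[i] += np.mean((X_t @ th - y_t) ** 2)
    return float(lams[int(np.argmin(risks))])

opt = {eps: opt_lambda(eps) for eps in (0.0, 0.5, 1.0)}
for eps, lam in opt.items():
    print(f"eps={eps:.1f} -> empirically optimal lambda ~ {lam:.3g}")
```

Averaging over repetitions reduces the noise in the grid search; how the reported optima move with `eps` is what the paper's scaling law characterizes precisely.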