Performative Risk Control: Calibrating Models for Reliable Deployment under Performativity

πŸ“… 2025-05-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the black-box model calibration problem under *performative prediction*, where model predictions influence the outcomes they aim to predict. We propose the first risk-controlling calibration framework with finite-sample statistical guarantees in this setting. Methodologically, we integrate rigorous statistical risk control into the performative learning paradigm, designing an iterative refinement calibration algorithm whose risk convergence and predictive performance improvement are theoretically established. The framework supports multiple risk measures and Hoeffding-type tail bounds. Empirically, on credit default prediction, it significantly improves risk coverage and calibration accuracy while remaining robust and controllable under distribution shift. Our approach thus provides both verifiable theoretical guarantees and practical tools for trustworthy decision-making in dynamic environments.

πŸ“ Abstract
Calibrating black-box machine learning models to achieve risk control is crucial for reliable decision-making. A rich line of literature has studied how to calibrate a model so that its predictions satisfy explicit finite-sample statistical guarantees under a fixed, static, and unknown data-generating distribution. However, prediction-supported decisions may influence the outcome they aim to predict, a phenomenon known as performativity of predictions, commonly seen in social science and economics. In this paper, we introduce Performative Risk Control, a framework to calibrate models to achieve risk control under performativity with provable theoretical guarantees. Specifically, we provide an iteratively refined calibration process in which the predictions are improved and risk-controlled throughout. We also study different types of risk measures and choices of tail bounds. Lastly, we demonstrate the effectiveness of our framework with numerical experiments on the task of predicting credit default risk. To the best of our knowledge, this is the first work to study statistically rigorous risk control under performativity, serving as an important safeguard against a wide range of strategic manipulation in decision-making processes.
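The abstract's setup of finite-sample risk guarantees via tail bounds resembles standard risk-controlling calibration, where a threshold is certified by a Hoeffding upper confidence bound on a bounded loss. Below is a minimal sketch of that generic idea only; the function names (`hoeffding_ucb`, `calibrate_threshold`) and the toy flag-rate loss are illustrative assumptions, not the paper's actual algorithm or API:

```python
import numpy as np

def hoeffding_ucb(emp_risk: float, n: int, delta: float) -> float:
    """Hoeffding upper confidence bound for a per-sample loss in [0, 1]."""
    return emp_risk + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def calibrate_threshold(scores, loss_fn, lambdas, alpha, delta):
    """Return the smallest lambda whose risk UCB is <= alpha.

    Assumes loss_fn(scores, lam) gives per-sample losses in [0, 1]
    that are nonincreasing in lam, so the first passing lambda is
    the least conservative choice with a valid guarantee.
    """
    n = len(scores)
    for lam in lambdas:  # lambdas in ascending order
        risk = float(loss_fn(scores, lam).mean())
        if hoeffding_ucb(risk, n, delta) <= alpha:
            return lam
    return None  # no threshold certifies risk <= alpha

# Toy loss: fraction of individuals flagged above the threshold.
miscoverage = lambda s, lam: (s > lam).astype(float)
```

With probability at least 1 - delta over the calibration sample, the true risk at the returned threshold is at most alpha; handling the feedback loop where deployment itself shifts the distribution is precisely what the paper's performative extension targets.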
Problem

Research questions and friction points this paper is trying to address.

Calibrating models for reliable risk control under performativity
Ensuring predictions improve and remain risk-controlled iteratively
Addressing strategic manipulation in decision-making with statistical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative calibration for risk control
Theoretical guarantees under performativity
Multiple risk measures and bounds
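The iterative-calibration idea can be caricatured in a toy loop: deploy a threshold, let the score distribution respond to it, then recalibrate on the shifted data. Every ingredient below (the shift model, the flag-rate loss, the Hoeffding slack, the helper `ucb_threshold`) is an illustrative assumption for intuition, not the paper's algorithm:

```python
import numpy as np

def ucb_threshold(scores, lambdas, alpha, delta):
    """Smallest threshold whose Hoeffding UCB on the flag rate is <= alpha."""
    n = len(scores)
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    for lam in lambdas:  # ascending; flag rate is nonincreasing in lam
        if float((scores > lam).mean()) + slack <= alpha:
            return lam
    return None

rng = np.random.default_rng(1)
scores = rng.uniform(size=5000)
lambdas = np.linspace(0.0, 1.0, 201)

history = []
for _ in range(3):
    lam = ucb_threshold(scores, lambdas, alpha=0.10, delta=0.05)
    history.append(lam)
    # Toy performative response: individuals flagged by the deployed
    # threshold lower their scores slightly before the next round.
    scores = np.clip(scores - 0.05 * (scores > lam), 0.0, 1.0)
```

Each round recertifies the risk bound on the post-deployment distribution; the paper's contribution is proving that such an iteration converges with the guarantee holding throughout, which this sketch does not attempt.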
πŸ”Ž Similar Papers
Victor Li
Victor Li
New York University
B
Baiting Chen
University of California, Los Angeles
Y
Yuzhen Mao
Simon Fraser University
Q
Qi Lei
New York University
Zhun Deng
Zhun Deng
Assistant Professor, Computer Science, UNC Chapel Hill
machine learningoptimizationstatisticstheoretical computer science