Conditional Forecasts and Proper Scoring Rules for Reliable and Accurate Performative Predictions

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses performative prediction, the setting in which forecasts shift the very data distribution they aim to predict, so that conventional proper scoring rules fail to elicit truthful forecasts. The paper first shows that conditioning forecasts on covariates that separate them from the outcome renders the target distribution forecast-invariant, making the forecasting problem well posed. It then proves a general impossibility theorem: even under this condition, classical proper scoring rules cannot incentivize correct forecast reporting. Two remedies are identified: (1) in decision-theoretic settings, separating forecasts enable correct and incentive-compatible elicitation, and (2) scoring forecasts with unbiased estimates of the divergence between the forecast and the induced outcome distribution elicits correct forecasts. Applied to parameter estimation, conditional forecasts and these scoring rules yield performatively stable estimates of performatively correct parameters, resolving the issues raised by Perdomo et al. (2020) and substantially improving the reliability and trustworthiness of predictive models in performative settings.

📝 Abstract
Performative predictions are forecasts which influence the outcomes they aim to predict, undermining the existence of correct forecasts and standard methods of elicitation and estimation. We show that conditioning forecasts on covariates that separate them from the outcome renders the target distribution forecast-invariant, guaranteeing well-posedness of the forecasting problem. However, even under this condition, classical proper scoring rules fail to elicit correct forecasts. We prove a general impossibility result and identify two solutions: (i) in decision-theoretic settings, elicitation of correct and incentive-compatible forecasts is possible if forecasts are separating; (ii) scoring with unbiased estimates of the divergence between the forecast and the induced distribution of the target variable yields correct forecasts. Applying these insights to parameter estimation, conditional forecasts and proper scoring rules enable performatively stable estimation of performatively correct parameters, resolving the issues raised by Perdomo et al. (2020). Our results expose fundamental limits of classical forecast evaluation and offer new tools for reliable and accurate forecasting in performative settings.
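A toy illustration of the impossibility result (our own sketch, not taken from the paper): with a binary outcome whose probability responds to the reported forecast, the expected log score, a classical proper scoring rule, is not maximized at the performatively correct fixed-point forecast. The response function `induced_p` below is a hypothetical choice for illustration only.

```python
import numpy as np

# Hypothetical performative response: reporting forecast f induces
# outcome probability p(f) = 0.2 + 0.5 * f (toy choice, not from the paper).
def induced_p(f):
    return 0.2 + 0.5 * f

# Performatively correct forecast: the fixed point f* = p(f*), here f* = 0.4.
fixed_point = 0.2 / (1 - 0.5)

# Expected log score of reporting f, evaluated against the distribution
# that this very report induces.
def expected_log_score(f):
    p = induced_p(f)
    return p * np.log(f) + (1 - p) * np.log(1 - f)

grid = np.linspace(0.01, 0.99, 981)
best_f = grid[np.argmax([expected_log_score(f) for f in grid])]

print(f"fixed point f* = {fixed_point:.2f}, log-score maximizer = {best_f:.2f}")
```

In this toy model the score-maximizing report lands well below f* = 0.4: understating the probability also lowers the induced probability, so the classical proper score rewards a forecast that is not performatively correct.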
Problem

Research questions and friction points this paper is trying to address.

Ensuring forecast invariance through covariate conditioning
Overcoming proper scoring rules' failure in performative predictions
Enabling performatively stable parameter estimation via new methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional forecasts on covariates for forecast-invariant targets
Separating forecasts enable incentive-compatible elicitation
Scoring with divergence estimates yields correct forecasts
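The divergence-based remedy can be sketched in the same toy model (again our own illustration, assuming the hypothetical response p(f) = 0.2 + 0.5f): minimizing the squared divergence (f − p(f))² recovers the fixed point, and because E[(f − Y₁)(f − Y₂)] = (f − p)² for independent draws Y₁, Y₂ ~ Bernoulli(p), the divergence can be estimated without bias from observed outcomes alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def induced_p(f):
    # Hypothetical performative response (same toy model as above).
    return 0.2 + 0.5 * f

# Exact squared divergence between the report f and the distribution it induces.
def divergence(f):
    return (f - induced_p(f)) ** 2

# Minimizing the divergence over a grid recovers the fixed point f* = 0.4.
grid = np.linspace(0.0, 1.0, 1001)
f_star = grid[np.argmin(divergence(grid))]
print(f"divergence minimizer = {f_star:.2f}")

# Unbiased estimation from data: for independent Y1, Y2 ~ Bernoulli(p(f)),
# E[(f - Y1)(f - Y2)] = (f - p(f))^2, so no knowledge of p(f) is needed.
f = 0.7
y1 = rng.random(100_000) < induced_p(f)
y2 = rng.random(100_000) < induced_p(f)
estimate = np.mean((f - y1) * (f - y2))
print(f"MC estimate = {estimate:.4f}, exact = {divergence(f):.4f}")
```

The product-of-two-draws trick is what makes the score usable in practice: a single outcome per report suffices for neither, but two independent outcomes drawn under the same report give an unbiased divergence estimate.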