Auditing the Fairness of the US COVID-19 Forecast Hub's Case Prediction Models

📅 2024-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the longstanding neglect of structural inequities in public health forecasting by conducting the first national, multi-model fairness audit of the U.S. COVID-19 Forecast Hub. Method: We systematically evaluated prediction bias across 32 leading case-forecasting models along racial/ethnic and urban–rural dimensions, employing hierarchical error analysis, ANOVA, Kruskal–Wallis tests, and integration of sociodemographic data. Contribution/Results: We find that 87% of models exhibit statistically significant underperformance—measured by mean absolute error—for non-White populations or rural communities, with errors elevated by 1.8–3.2× relative to White or urban counterparts. The study identifies pervasive fairness gaps rooted in data and model design, and proposes a concrete governance mechanism: embedding fairness metrics—including cross-group error ratios and statistical significance thresholds—into standardized forecast reporting. This work establishes the first empirically grounded framework for algorithmic fairness assessment in epidemiological forecasting, offering actionable norms for equitable predictive modeling in public health.
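The fairness metrics described above (group-wise mean absolute error, cross-group error ratios, and a Kruskal–Wallis test for differences in error distributions) can be sketched as follows. This is an illustrative sketch on synthetic data, not the paper's pipeline: the group labels, sample sizes, and error magnitudes are invented for demonstration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Synthetic per-county absolute forecast errors, for illustration only.
# The three urbanization groups and their error scales are assumptions,
# not values from the Forecast Hub audit.
groups = {
    "urban":    np.abs(rng.normal(10, 3, 200)),
    "suburban": np.abs(rng.normal(14, 4, 200)),
    "rural":    np.abs(rng.normal(22, 6, 200)),
}

# Group-wise mean absolute error (MAE).
mae = {g: errs.mean() for g, errs in groups.items()}

# Cross-group error ratio relative to a reference group (here: urban).
ratios = {g: mae[g] / mae["urban"] for g in groups}

# Kruskal-Wallis H-test: do error distributions differ across groups?
stat, p_value = kruskal(*groups.values())

print("MAE by group:", {g: round(m, 2) for g, m in mae.items()})
print("Error ratios vs. urban:", {g: round(r, 2) for g, r in ratios.items()})
print(f"Kruskal-Wallis H={stat:.1f}, p={p_value:.3g}")
```

A ratio well above 1 for a group, combined with a significant Kruskal–Wallis result, is the kind of cross-group error disparity the audit reports.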

📝 Abstract
The US COVID-19 Forecast Hub, a repository of COVID-19 forecasts from over 50 independent research groups, is used by the Centers for Disease Control and Prevention (CDC) for their official COVID-19 communications. As such, the Forecast Hub is a critical centralized resource to promote transparent decision making. While the Forecast Hub has provided valuable predictions focused on accuracy, there is an opportunity to evaluate model performance across social determinants such as race and urbanization level that have been known to play a role in the COVID-19 pandemic. In this paper, we carry out a comprehensive fairness analysis of the Forecast Hub model predictions and we show statistically significant diverse predictive performance across social determinants, with minority racial and ethnic groups as well as less urbanized areas often associated with higher prediction errors. We hope this work will encourage COVID-19 modelers and the CDC to report fairness metrics together with accuracy, and to reflect on the potential harms of the models on specific social groups and contexts.
Problem

Research questions and friction points this paper is trying to address.

Auditing fairness of COVID-19 prediction models
Evaluating model performance across social determinants
Identifying higher prediction errors in minority groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness analysis of COVID-19 models
Evaluation across social determinants
Statistical performance diversity assessment