Behavior of prediction performance metrics with rare events

📅 2025-04-22
🤖 AI Summary
Common binary classification metrics—AUC, sensitivity, specificity, positive predictive value (PPV), and accuracy—exhibit unstable behavior under rare-event settings, yet their statistical properties and underlying bias-variance trade-offs remain poorly understood. Method: We conduct large-scale simulations, cross-validation, and empirical bias-variance decomposition to systematically analyze metric stability as a function of event prevalence, minority-class sample size, and majority-class sample size. Contribution/Results: We establish, for the first time, that AUC stability depends on minority-class sample size—not event rate—whereas sensitivity is governed by event count, specificity by non-event count, and PPV and accuracy remain persistently sensitive to event rate. These findings clarify a longstanding misconception: event *count*, not event *rate*, is the primary driver of metric variability for most measures. Consequently, AUC remains reliable even under rarity when the absolute number of events is moderate. Our work provides theoretical grounding and practical guidance for metric selection and interpretation in rare-event modeling.

📝 Abstract
Area under the receiver operating characteristic curve (AUC) is commonly reported alongside binary prediction models. However, there are concerns that AUC might be a misleading measure of prediction performance in the rare event setting. This setting is common since many events of clinical importance are rare. We conducted a simulation study to determine when, or whether, AUC is unstable in the rare event setting. Specifically, we aimed to determine whether the bias and variance of AUC are driven by the number of events or the event rate. We also investigated the behavior of other commonly used measures of prediction performance, including positive predictive value, accuracy, sensitivity, and specificity. Our results indicate that poor AUC behavior -- as measured by empirical bias, variability of cross-validated AUC estimates, and empirical coverage of confidence intervals -- is driven by the minimum class size, not the event rate. Performance of sensitivity is driven by the number of events, while that of specificity is driven by the number of non-events. Other measures, including positive predictive value and accuracy, depend on the event rate even in large samples. AUC is reliable in the rare event setting provided that the total number of events is moderately large.
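The abstract's central claim -- that AUC variability depends on the absolute number of events rather than the event rate -- can be illustrated with a small simulation. The sketch below is not the paper's simulation design; it is a minimal stand-in that assumes Gaussian risk scores (events ~ N(1, 1), non-events ~ N(0, 1)) and estimates AUC by the Mann-Whitney statistic, holding the event rate fixed at 1% while varying the event count.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(event_scores, nonevent_scores):
    # Mann-Whitney estimate of AUC: P(event score > non-event score),
    # counting ties as 1/2.
    diff = event_scores[:, None] - nonevent_scores[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def auc_sd(n_events, event_rate, n_reps=200):
    # Fix the event rate; the number of non-events follows from it.
    n_nonevents = int(round(n_events * (1 - event_rate) / event_rate))
    aucs = [
        empirical_auc(rng.normal(1.0, 1.0, n_events),
                      rng.normal(0.0, 1.0, n_nonevents))
        for _ in range(n_reps)
    ]
    return float(np.std(aucs))

# Same 1% event rate, different absolute numbers of events.
sd_few = auc_sd(n_events=10, event_rate=0.01)
sd_many = auc_sd(n_events=100, event_rate=0.01)
print(sd_few, sd_many)  # variability shrinks as the event count grows
```

At the same 1% prevalence, the AUC estimate with 100 events is markedly less variable than with 10 events, consistent with the minimum class size, not the rate, driving AUC stability.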
Problem

Research questions and friction points this paper is trying to address.

Assess AUC reliability in rare event prediction models
Determine if AUC bias depends on event count or rate
Compare performance metrics like sensitivity and specificity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulation study on AUC stability in rare events
Analyzed bias and variance of AUC metrics
Identified key drivers for prediction performance measures
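The finding that positive predictive value depends on the event rate even in large samples follows directly from Bayes' rule: PPV mixes the classifier's sensitivity and specificity with the prevalence itself. A minimal illustration (the threshold, sensitivity, and specificity values here are hypothetical, not from the paper):

```python
def ppv(sens, spec, prevalence):
    # Bayes' rule: PPV = sens*p / (sens*p + (1 - spec)*(1 - p))
    return sens * prevalence / (
        sens * prevalence + (1 - spec) * (1 - prevalence)
    )

# Identical classifier (90% sensitivity, 90% specificity),
# evaluated at two different event rates:
print(ppv(0.9, 0.9, 0.10))  # 0.5
print(ppv(0.9, 0.9, 0.01))  # ~0.083
```

No amount of additional data removes this dependence: as prevalence falls, false positives from the large non-event class swamp the true positives, so PPV drops even though sensitivity and specificity are unchanged.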
Emily Minus
Department of Biostatistics, University of Washington
R. Y. Coley
Biostatistics Division, Kaiser Permanente Washington Health Research Institute; Department of Biostatistics, University of Washington
Susan Shortreed
Investigator, Biostatistics Unit, Kaiser Permanente Washington Health Research Institute
variable selection, clinical prediction models, causal inference, adaptive interventions
Brian D. Williamson
Biostatistics Division, Kaiser Permanente Washington Health Research Institute; Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Center; Department of Biostatistics, University of Washington