🤖 AI Summary
Common binary classification metrics—AUC, sensitivity, specificity, positive predictive value (PPV), and accuracy—exhibit unstable behavior under rare-event settings, yet their statistical properties and underlying bias-variance trade-offs remain poorly understood.
Method: We conduct large-scale simulations, cross-validation, and empirical bias-variance decomposition to systematically analyze metric stability as a function of event prevalence, minority-class sample size, and majority-class sample size.
Contribution/Results: We establish, for the first time, that AUC stability depends on minority-class sample size, not event rate, whereas sensitivity is governed by event count, specificity by non-event count, and PPV and accuracy remain persistently sensitive to event rate. These findings clarify a longstanding misconception: event *count*, not event *rate*, is the primary driver of metric variability for most measures. Consequently, AUC remains reliable even under rarity when the absolute number of events is moderate. Our work provides theoretical grounding and practical guidance for metric selection and interpretation in rare-event modeling.
📝 Abstract
Area under the receiver operating characteristic curve (AUC) is commonly reported alongside binary prediction models. However, there are concerns that AUC might be a misleading measure of prediction performance in the rare event setting. This setting is common, since many events of clinical importance are rare. We conducted a simulation study to determine when, or whether, AUC is unstable in the rare event setting. Specifically, we aimed to determine whether the bias and variance of AUC are driven by the number of events or by the event rate. We also investigated the behavior of other commonly used measures of prediction performance, including positive predictive value, accuracy, sensitivity, and specificity. Our results indicate that poor AUC behavior -- as measured by empirical bias, variability of cross-validated AUC estimates, and empirical coverage of confidence intervals -- is driven by the minimum class size, not the event rate. Performance of sensitivity is driven by the number of events, while that of specificity is driven by the number of non-events. Other measures, including positive predictive value and accuracy, depend on the event rate even in large samples. AUC is reliable in the rare event setting provided that the total number of events is moderately large.
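The abstract's central claim -- that AUC variability is driven by the minimum class size rather than the event rate -- can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the paper's actual simulation design: it draws event and non-event scores from shifted Gaussians and computes the empirical AUC in its Mann-Whitney form, comparing the spread of AUC estimates across settings that vary event count and event rate independently.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(pos, neg):
    # Mann-Whitney form of AUC: fraction of (event, non-event) pairs
    # where the event's score exceeds the non-event's score.
    return (pos[:, None] > neg[None, :]).mean()

def auc_sd(n_events, n_nonevents, n_sims=500):
    # Empirical standard deviation of the AUC estimate over repeated
    # samples from a toy two-class Gaussian score model (events shifted by 1).
    aucs = np.empty(n_sims)
    for i in range(n_sims):
        pos = rng.normal(1.0, 1.0, n_events)      # event scores
        neg = rng.normal(0.0, 1.0, n_nonevents)   # non-event scores
        aucs[i] = empirical_auc(pos, neg)
    return aucs.std()

# Same event count (50), very different event rates (5% vs 50%):
sd_rare = auc_sd(50, 950)    # 5% event rate
sd_bal = auc_sd(50, 50)      # 50% event rate
# Same event rate (5%), much smaller event count:
sd_small = auc_sd(10, 190)

print(f"50 events at  5% rate: AUC SD = {sd_rare:.3f}")
print(f"50 events at 50% rate: AUC SD = {sd_bal:.3f}")
print(f"10 events at  5% rate: AUC SD = {sd_small:.3f}")
```

Under this toy model, holding the event rate at 5% while shrinking the event count inflates the AUC's standard deviation substantially, whereas changing the event rate tenfold with the event count fixed changes it only modestly -- consistent with the abstract's conclusion that the minimum class size, not the rate, governs AUC stability.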