🤖 AI Summary
Conventional K-fold cross-validation relies on heuristic choices of K (e.g., 5 or 10), leading to suboptimal bias–variance trade-offs in model evaluation. Method: We propose a data- and model-adaptive framework for selecting the optimal K. First, we derive a theoretical upper bound on the finite-sample estimation uncertainty of cross-validation. Then, we formulate a utility-driven optimization objective that models K-selection explicitly as a bias–variance trade-off. Contribution/Results: Empirical validation on real-world datasets, using linear regression and random forests, demonstrates that the optimal K depends strongly on sample size, signal-to-noise ratio, and model complexity; fixed-K conventions therefore rest on unstated assumptions about the data. Our framework improves the reliability and interpretability of model evaluation and provides a principled foundation for robust model comparison.
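The premise above, that the choice of K changes both the cross-validated score and its spread, can be seen in a few lines. This is a minimal sketch with a synthetic dataset and linear regression as stand-ins, not the paper's experiments:

```python
# Sketch: how the choice of K shifts the CV score and its fold-to-fold spread.
# Synthetic data and linear regression are illustrative stand-ins only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

for k in (2, 5, 10, 20):
    # Larger K -> more training data per fit, but smaller hold-out sets,
    # so the per-fold scores become noisier.
    scores = cross_val_score(LinearRegression(), X, y, cv=k, scoring="r2")
    print(f"K={k:2d}: mean R^2 = {scores.mean():.3f}, std = {scores.std():.3f}")
```

With a fixed dataset, increasing K typically shrinks the pessimistic bias of the mean score while widening the spread across folds, which is exactly the tradeoff the proposed framework makes explicit.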
📝 Abstract
Cross-validation is a standard technique used across the sciences to test how well a model predicts new data. The data are split into $K$ "folds," and each fold in turn serves as a hold-out set for evaluating the model's predictive ability. Researchers typically rely on convention when choosing $K$, most commonly $K=5$, i.e., an $80{:}20$ train–test split, even though the choice of $K$ can affect inference and model evaluation. In principle, $K$ should be chosen by balancing predictive accuracy (bias) against the uncertainty of the accuracy estimate (variance), a tradeoff governed by the size of the hold-out set: more training data yield more accurate models, but the resulting smaller test sets make their evaluation more uncertain, and vice versa. The challenge is that this evaluation uncertainty cannot be estimated directly from the data. We propose a procedure to determine the optimal $K$ by deriving a finite-sample upper bound on the evaluation uncertainty and adopting a utility-based approach that makes this tradeoff explicit. Analyses of real-world datasets using linear regression and random forests demonstrate the procedure in practice, providing insight into implicit assumptions, robustness, and model performance. Critically, the results show that the optimal $K$ depends on both the data and the model, and that conventional choices implicitly make assumptions about fundamental characteristics of the data. Our framework makes these assumptions explicit and provides a principled, transparent way to select $K$ based on the data and model rather than convention. By replacing a one-size-fits-all choice with context-specific reasoning, it enables more reliable comparisons of predictive performance across scientific domains.
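The utility-based selection described in the abstract can be caricatured as follows. This is a toy sketch, not the paper's derivation: the bias proxy is the normalized mean CV error, and the Hoeffding-style $\sqrt{\log(2/\delta)/(2\,n_{\text{test}})}$ term is a generic placeholder for the paper's finite-sample uncertainty bound; `delta` and `lam` are hypothetical tuning knobs.

```python
# Toy utility-based K selection: trade off a bias proxy (mean CV error,
# trained on (K-1)/K of the data) against a placeholder variance term that
# shrinks with the hold-out size n/K. NOT the paper's actual bound.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=150, n_features=8, noise=15.0, random_state=1)
n, delta, lam = len(y), 0.05, 1.0  # delta, lam: hypothetical knobs, not from the paper

def utility(k):
    # Bias proxy: mean squared CV error, normalized by the outcome variance.
    mse = -cross_val_score(LinearRegression(), X, y, cv=k,
                           scoring="neg_mean_squared_error").mean()
    # Placeholder uncertainty term: Hoeffding-style, grows as hold-out sets shrink.
    bound = np.sqrt(np.log(2.0 / delta) / (2.0 * (n // k)))
    return -(mse / y.var()) - lam * bound  # higher utility = better tradeoff

candidates = range(2, 16)
best_k = max(candidates, key=utility)
print("selected K:", best_k)
```

The point of the sketch is the shape of the objective, not the specific terms: once an upper bound on the evaluation uncertainty is available, selecting $K$ reduces to a one-dimensional maximization over candidate values.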