🤖 AI Summary
Learning curves are conventionally assumed to be monotonic and convex, yet pathological behaviors such as non-monotonicity and non-convexity are frequently observed in practice, undermining the reliability of curve fitting and model selection.
Method: We construct LCDB 1.1, a high-resolution learning curve database, and introduce a statistically rigorous framework, including piecewise linear trend testing, to systematically quantify and localize pathologies across diverse learners and datasets.
Contribution/Results: Our analysis reveals that 14% of learning curves exhibit pathologies, roughly double prior estimates, showing that such behaviors are substantially more prevalent than previously recognized. We identify specific learners that are prone to pathology and show that feature scaling offers limited mitigation. Experiments across multiple models and datasets confirm that pathological curves significantly degrade fitting accuracy and model-selection consistency. LCDB 1.1 establishes a challenging benchmark for evaluating algorithmic robustness and theoretical modeling capacity, challenging foundational assumptions in learning curve theory.
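The non-monotonicity discussed above can be made concrete with a small sketch. The snippet below flags segments of a sample-wise learning curve where error *increases* as more training data is added (a "peaking" pathology). This is a simplistic threshold check for illustration only, not the paper's piecewise linear trend test, and all names here are hypothetical.

```python
import numpy as np

def find_nonmonotone_segments(sizes, errors, tol=0.0):
    """Return (size_i, size_{i+1}) pairs where error rises with more data."""
    sizes = np.asarray(sizes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    diffs = np.diff(errors)                  # error change between anchors
    bad = np.where(diffs > tol)[0]           # positive change = ill-behaved
    return [(sizes[i], sizes[i + 1]) for i in bad]

# A curve with a "peak": error rises between the 64- and 128-sample anchors.
sizes = [16, 32, 64, 128, 256, 512]
errors = [0.40, 0.31, 0.25, 0.28, 0.20, 0.18]
print(find_nonmonotone_segments(sizes, errors))  # [(64.0, 128.0)]
```

A statistically rigorous version, as in the paper, would additionally account for the variance of the error estimates at each anchor before declaring a segment pathological.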
📝 Abstract
Sample-wise learning curves plot performance versus training set size. They are useful for studying scaling laws and for speeding up hyperparameter tuning and model selection. Learning curves are often assumed to be well-behaved: monotone (i.e., improving with more data) and convex. By constructing the Learning Curves Database 1.1 (LCDB 1.1), a large-scale database of high-resolution learning curves, we show that learning curves are well-behaved less often than previously thought. Using statistically rigorous methods, we observe significant ill-behavior in approximately 14% of the learning curves, almost twice the previous estimate. We also identify which learners are to blame, showing that some learners are markedly more ill-behaved than others. Additionally, we demonstrate that different feature scalings rarely resolve ill-behavior. Finally, we evaluate the impact of ill-behavior on downstream tasks, such as learning curve fitting and model selection, and find that it poses significant challenges, underscoring the relevance and potential of LCDB 1.1 as a challenging benchmark for future research.
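Learning curve fitting, one of the downstream tasks the abstract mentions, typically means fitting a parametric family to the observed anchors. The sketch below fits a three-parameter power law err(n) = a·n^(−b) + c, a common choice in the learning-curve literature (though not necessarily the family used in LCDB 1.1), to a synthetic well-behaved curve using SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def pow3(n, a, b, c):
    """Three-parameter power law: error = a * n**(-b) + c."""
    return a * np.power(n, -b) + c

# Synthetic, well-behaved curve generated with a=1.0, b=0.5, c=0.05.
n = np.array([16, 32, 64, 128, 256, 512, 1024], dtype=float)
err = pow3(n, 1.0, 0.5, 0.05)

# Recover the parameters from the anchors.
params, _ = curve_fit(pow3, n, err, p0=[1.0, 0.5, 0.0], maxfev=10000)
print(params)  # approximately [1.0, 0.5, 0.05]
```

On a pathological (e.g., peaking) curve, no member of this monotone family can match the data, which is why ill-behavior degrades fitting accuracy and any model selection built on the fitted extrapolations.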