The Lie of the Average: How Class Incremental Learning Evaluation Deceives You?

📅 2025-09-26
📈 Citations: 0
✨ Influential: 0
๐Ÿ“„ PDF
🤖 AI Summary
In class-incremental learning (CIL) evaluation, mainstream protocols estimate performance via mean and variance over only a few randomly sampled class orders, leading to biased mean estimates and severely underestimated variance, and thus failing to reflect true generalization capability. This work argues that reliable assessment requires a comprehensive characterization of the performance distribution. To this end, the authors propose EDGE, an adaptive evaluation protocol based on extremal sequences. First, they observe a consistent positive correlation between inter-task similarity and model performance, which enables similarity-driven search for extremal (best/worst-case) class orders. Second, they combine this analysis with dynamic sampling to construct a robust performance-distribution estimator. Experiments demonstrate that EDGE substantially reduces estimation bias in both mean and variance, accurately captures performance bounds, and provides reproducible, high-confidence evaluation for model selection and robustness validation.
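The similarity-driven search for extremal class orders described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: it assumes a precomputed class-similarity matrix (e.g., cosine similarity of class-mean features) and uses brute-force sampling where EDGE uses a guided, adaptive search. The names `order_similarity` and `search_extreme_orders` are hypothetical.

```python
import random
import numpy as np

def order_similarity(order, sim, task_size):
    """Mean similarity between consecutive tasks for a given class order.
    `sim` is a (num_classes x num_classes) class-similarity matrix."""
    tasks = [order[i:i + task_size] for i in range(0, len(order), task_size)]
    pair_means = [sim[np.ix_(a, b)].mean() for a, b in zip(tasks, tasks[1:])]
    return float(np.mean(pair_means))

def search_extreme_orders(sim, task_size, n_candidates=200, seed=0):
    """Sample candidate class orders and keep the ones with extremal
    inter-task similarity -- a crude stand-in for EDGE's guided search."""
    rng = random.Random(seed)
    n = sim.shape[0]
    best = worst = None
    for _ in range(n_candidates):
        order = rng.sample(range(n), n)
        s = order_similarity(order, sim, task_size)
        if best is None or s > best[0]:
            best = (s, order)
        if worst is None or s < worst[0]:
            worst = (s, order)
    return best, worst

# Tiny demo: four classes forming two clusters of mutually similar classes.
sim = np.array([[1.0, 0.9, 0.1, 0.1],
                [0.9, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.9],
                [0.1, 0.1, 0.9, 1.0]])
(best_s, best_order), (worst_s, worst_order) = search_extreme_orders(sim, task_size=2)
```

The candidate orders with the highest and lowest inter-task similarity then serve as the best/worst-case sequences on which the model is actually evaluated.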

📝 Abstract
Class Incremental Learning (CIL) requires models to continuously learn new classes without forgetting previously learned ones, while maintaining stable performance across all possible class sequences. In real-world settings, the order in which classes arrive is diverse and unpredictable, and model performance can vary substantially across different sequences. Yet mainstream evaluation protocols calculate mean and variance from only a small set of randomly sampled sequences. Our theoretical analysis and empirical results demonstrate that this sampling strategy fails to capture the full performance range, resulting in biased mean estimates and a severe underestimation of the true variance in the performance distribution. We therefore contend that a robust CIL evaluation protocol should accurately characterize and estimate the entire performance distribution. To this end, we introduce the concept of extreme sequences and provide theoretical justification for their crucial role in the reliable evaluation of CIL. Moreover, we observe a consistent positive correlation between inter-task similarity and model performance, a relation that can be leveraged to guide the search for extreme sequences. Building on these insights, we propose EDGE (Extreme case-based Distribution and Generalization Evaluation), an evaluation protocol that adaptively identifies and samples extreme class sequences using inter-task similarity, offering a closer approximation of the ground-truth performance distribution. Extensive experiments demonstrate that EDGE effectively captures performance extremes and yields more accurate estimates of distributional boundaries, providing actionable insights for model selection and robustness checking. Our code is available at https://github.com/AIGNLAI/EDGE.
Problem

Research questions and friction points this paper is trying to address.

- Current CIL evaluation underestimates the true performance variance across class sequences
- Robust evaluation must accurately characterize the full performance distribution
- The proposed EDGE protocol identifies extreme sequences via inter-task similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Scores class sequences by inter-task similarity
- Adaptively identifies extreme sequences for testing
- Approximates the true performance distribution more accurately
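The core claim, that a few random sequences underestimate the spread while adding extremal sequences recovers it, can be shown with a toy comparison. Here `toy_performance` is a synthetic stand-in for a full CIL training run, and the sorted/reversed orders stand in for the extremes EDGE would find via similarity; none of this is the paper's actual experiment.

```python
import itertools
import random
import statistics

def toy_performance(order):
    """Synthetic stand-in for a full CIL run: the score depends on how
    sorted the class order is (a proxy for favorable inter-task
    similarity). A real study would train the model incrementally."""
    n = len(order)
    inversions = sum(1 for i, j in itertools.combinations(range(n), 2)
                     if order[i] > order[j])
    return 0.5 + 0.4 * (1 - inversions / (n * (n - 1) / 2))  # in [0.5, 0.9]

def summarize(scores):
    return {"mean": statistics.mean(scores),
            "stdev": statistics.stdev(scores),
            "min": min(scores), "max": max(scores)}

random.seed(0)
classes = list(range(6))

# Mainstream protocol: a handful of randomly sampled class orders.
random_orders = [random.sample(classes, len(classes)) for _ in range(5)]
naive = summarize([toy_performance(o) for o in random_orders])

# EDGE-style protocol: additionally evaluate extremal orders (here, the
# known best and worst orders by construction of toy_performance).
extreme_orders = [sorted(classes), sorted(classes, reverse=True)]
edge = summarize([toy_performance(o) for o in random_orders + extreme_orders])
```

By construction the extremal orders bracket the distribution, so the EDGE-style estimate's min-max range is at least as wide as the random-only one, mirroring the paper's argument that random sampling alone misses the distribution's tails.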