🤖 AI Summary
Current continual learning (CL) evaluation protocols suffer from a critical flaw: hyperparameter tuning and evaluation are conducted within the same scenario, which systematically overestimates CL capability and relies on unrealistic tuning practices that are not feasible in deployment.
Method: We propose the Generalizable Two-phase Evaluation Protocol (GTEP), which strictly decouples hyperparameter tuning (performed solely on a source dataset) from performance evaluation (conducted on a target dataset with the same scenario configuration), thereby testing whether CL capacity generalizes across datasets.
Contribution/Results: Extensive experiments (over 8,000 runs across CIFAR and ImageNet variants, in both pre-trained and non-pre-trained class-incremental learning settings) show that mainstream state-of-the-art methods suffer 30–50% average performance degradation under GTEP. This reveals their lack of robustness across deployment scenarios and establishes GTEP as a more rigorous, realistic benchmark for trustworthy continual learning.
📝 Abstract
Continual learning (CL) aims to train a model on a sequence of tasks (i.e., a CL scenario) while balancing the trade-off between plasticity (learning new tasks) and stability (retaining prior knowledge). The dominant conventional evaluation protocol for CL algorithms selects the best hyperparameters (e.g., learning rate, mini-batch size, regularization strengths) within a given scenario and then evaluates the algorithms with these hyperparameters in the same scenario. However, this protocol has significant shortcomings: it overestimates the CL capacity of algorithms and relies on unrealistic hyperparameter tuning that is not feasible in real-world applications. From the fundamental principles of evaluation in machine learning, we argue that the evaluation of CL algorithms should focus on assessing the generalizability of their CL capacity to unseen scenarios. Based on this, we propose the Generalizable Two-phase Evaluation Protocol (GTEP), consisting of a hyperparameter tuning phase and an evaluation phase. Both phases share the same scenario configuration (e.g., number of tasks) but are generated from different datasets. Hyperparameters of CL algorithms are tuned in the first phase and applied in the second phase to evaluate the algorithms. We apply this protocol to class-incremental learning, both with and without pretrained models. Across more than 8,000 experiments, our results show that most state-of-the-art algorithms fail to replicate their reported performance, highlighting that their CL capacity has been significantly overestimated under the conventional evaluation protocol. Our implementation can be found at https://github.com/csm9493/GTEP.
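The two-phase structure described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the paper's implementation: `run_cl_scenario` is a toy stand-in for training a CL algorithm on a task sequence and reporting average accuracy, and the scenario dictionaries only emulate the fact that different datasets can favor different hyperparameters. The point it demonstrates is the protocol itself: hyperparameters are selected only on the source scenario and then frozen for evaluation on the target scenario.

```python
def run_cl_scenario(scenario, hparams):
    """Toy stand-in for training a CL algorithm on a scenario and
    returning average incremental accuracy. Here accuracy simply
    peaks at a scenario-specific 'best' learning rate."""
    lr = hparams["lr"]
    return max(0.0, 1.0 - abs(lr - scenario["best_lr"]) * scenario["sensitivity"])

def gtep(source_scenario, target_scenario, hparam_grid):
    # Phase 1: tune hyperparameters ONLY on the source scenario.
    best_hp = max(hparam_grid, key=lambda hp: run_cl_scenario(source_scenario, hp))
    # Phase 2: evaluate the frozen hyperparameters on the target scenario,
    # which shares the scenario configuration but uses a different dataset.
    target_acc = run_cl_scenario(target_scenario, best_hp)
    return best_hp, target_acc

# Hypothetical scenarios (e.g., tune on a CIFAR-based scenario,
# evaluate on an ImageNet-variant scenario with the same task structure).
grid = [{"lr": lr} for lr in (0.001, 0.01, 0.1)]
source = {"best_lr": 0.01, "sensitivity": 5.0}
target = {"best_lr": 0.1, "sensitivity": 5.0}

hp, acc = gtep(source, target, grid)
# The conventional protocol would instead report the target scenario's own
# best-tuned accuracy, which is higher -- the gap is the overestimation.
conventional_acc = max(run_cl_scenario(target, h) for h in grid)
```

In this toy setup the hyperparameters chosen on the source transfer imperfectly to the target, so `acc` falls below `conventional_acc`, mirroring the overestimation the abstract describes.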