Class Incremental Learning for Algorithm Selection

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of dynamic solver category growth and continual model updating in streaming algorithm selection. It introduces class-incremental learning (CIL) to the algorithm selection domain for the first time and systematically investigates its applicability. To mitigate catastrophic forgetting, eight continual learning methods are comparatively evaluated on a bin-packing dataset; rehearsal-based strategies consistently outperform alternatives, exhibiting only ~7% average performance degradation, low forgetting rates, and high stability. The study demonstrates that rehearsal mechanisms—when integrated with an appropriate classification training paradigm—effectively support dynamic expansion of solver sets, preserving prior knowledge while accurately recognizing newly introduced solver categories. This work establishes the first systematic CIL-based solution and empirical benchmark for adaptive algorithm selection in streaming optimization settings.
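The rehearsal mechanism highlighted above can be pictured as a small per-class replay memory: exemplars of previously seen solver classes are stored and mixed back into training batches when new solver categories arrive. The sketch below is an illustrative assumption of how such a buffer might look; the class name, capacity, and reservoir-style replacement are not taken from the paper's implementation.

```python
import random


class RehearsalBuffer:
    """Illustrative per-class replay memory for class-incremental learning.

    Hypothetical sketch, not the paper's exact implementation: stores a
    bounded sample of feature vectors per solver class and replays them
    alongside new data to mitigate catastrophic forgetting.
    """

    def __init__(self, capacity_per_class=20):
        self.capacity = capacity_per_class
        self.memory = {}  # solver label -> list of stored feature vectors
        self.seen = {}    # solver label -> count of examples observed so far

    def add(self, features, label):
        bucket = self.memory.setdefault(label, [])
        self.seen[label] = self.seen.get(label, 0) + 1
        if len(bucket) < self.capacity:
            bucket.append(features)
        else:
            # Reservoir sampling: every example seen for this class has an
            # equal chance of remaining in the bounded buffer.
            idx = random.randrange(self.seen[label])
            if idx < self.capacity:
                bucket[idx] = features

    def replay_batch(self, k):
        # Draw up to k stored exemplars of old solver classes to mix into
        # each training batch for the newly arrived classes.
        pool = [(x, y) for y, xs in self.memory.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```

During an incremental update, the model would be trained on the union of the new class's instances and a `replay_batch`, so gradients continue to reflect the old solver categories.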

📝 Abstract
Algorithm selection is commonly used to predict the best solver from a portfolio on a per-instance basis. In many real scenarios, instances arrive in a stream: new instances become available over time, while the number of class labels can also grow as new data distributions arrive downstream. As a result, the classification model needs to be periodically updated to reflect additional solvers without catastrophic forgetting of past data. In machine learning (ML), this is referred to as Class Incremental Learning (CIL). While commonly addressed in ML settings, its relevance to algorithm selection in optimisation has not been previously studied. Using a bin-packing dataset, we benchmark 8 continual learning methods with respect to their ability to withstand catastrophic forgetting. We find that rehearsal-based methods significantly outperform other CIL methods. While there is evidence of forgetting, the loss is small at around 7%. Hence, these methods appear to be a viable approach to continual learning in streaming optimisation scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addressing catastrophic forgetting in algorithm selection models
Evaluating continual learning methods for streaming optimization scenarios
Benchmarking rehearsal-based methods against other CIL approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Class Incremental Learning for algorithm selection
Benchmarks 8 continual learning methods
Rehearsal-based methods outperform others