🤖 AI Summary
In continual learning, the order of task sequences significantly impacts model performance, yet existing work lacks systematic modeling and exploitation of inter-task transferability. Method: This paper formally defines and quantifies forward and backward cumulative transferability across task sequences, introducing a bidirectional transferability measurement framework. Based on this metric, we propose a provably superior task ordering optimization method—guaranteed to outperform random sequencing—and enable sequence-level performance prediction and task dependency modeling. Results: Extensive experiments on standard benchmarks (e.g., Split-CIFAR100, PermutedMNIST) demonstrate that our approach consistently improves final accuracy by an average of +2.3% while effectively mitigating catastrophic forgetting. The core contribution is establishing a transferability-driven task orchestration paradigm, providing both an interpretable, optimization-aware theoretical foundation and practical tools for principled task sequence design in continual learning.
📝 Abstract
In continual learning, understanding the properties of task sequences and their relationships to model performance is important for developing advanced algorithms with better accuracy. However, efforts in this direction remain underdeveloped despite encouraging progress in methodology development. In this work, we investigate the impacts of sequence transferability on continual learning and propose two novel measures that capture the total transferability of a task sequence, either in the forward or backward direction. Based on the empirical properties of these measures, we then develop a new method for the task order selection problem in continual learning. Our method can be shown to offer a better performance than the conventional strategy of random task selection.