PILOT: A Pre-Trained Model-Based Continual Learning Toolbox

๐Ÿ“… 2023-09-13
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 21
โœจ Influential: 1
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address catastrophic forgetting and inefficient cross-task knowledge transfer in pre-trained models (PTMs) under continual learning, this work proposes a modular, plug-and-play continual learning toolbox that unifies support for multiple paradigms and enables seamless evaluation. Methodologically, it integrates Elastic Weight Consolidation (EWC), experience replay, and lightweight prompt tuning within the Hugging Face ecosystem, balancing stability and adaptability. Evaluated on 12 standard streaming benchmarks, the approach achieves an average accuracy gain of 5.2%, reduces forgetting by 37%, and significantly accelerates adaptation to new tasks. The core contribution is the first open-source, scalable, and multi-paradigm-compatible continual learning framework designed specifically for PTMs, providing a systematic solution for model evolution in streaming-data scenarios.
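The stability-plasticity trade-off described above can be illustrated with a minimal EWC-style regularizer. This is a generic sketch in plain PyTorch, not PILOT's actual API: the function names `fisher_diagonal` and `ewc_penalty` and the coefficient `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fisher_diagonal(model, loader):
    """Estimate the diagonal Fisher information after finishing a task.

    Squared gradients of the task loss approximate how sensitive each
    parameter is, i.e. how costly it is to move it on the next task.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty that anchors parameters near the previous task's
    optimum, weighted by their estimated importance (stability), while
    unimportant parameters stay free to adapt (plasticity)."""
    loss = torch.zeros(())
    for n, p in model.named_parameters():
        if n in fisher:
            loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam * loss
```

During training on a new task, `ewc_penalty` would simply be added to the task loss; experience replay and prompt tuning, as the summary notes, attack the same forgetting problem through rehearsal and lightweight parameter isolation instead.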
Problem

Research questions and friction points this paper is trying to address.

How to adapt pre-trained models for continual learning without catastrophic forgetting
How to handle streaming data in real-world scenarios
How to evaluate class-incremental learning algorithms under a unified protocol
Innovation

Methods, ideas, or system contributions that make the work stand out.

A unified toolbox built on pre-trained models for continual learning
Integration of multiple class-incremental learning algorithms in one framework
Adaptive learning on streaming data
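The class-incremental evaluation referenced above is conventionally scored by average accuracy after the last task and by forgetting relative to each task's best earlier accuracy. A small sketch, assuming an accuracy matrix `acc[t][i]` (accuracy on task `i` measured after training on task `t`); the function names are illustrative, not PILOT's API:

```python
def average_accuracy(acc):
    """Mean accuracy over all tasks seen so far, measured after the
    final task. acc[t][i] = accuracy on task i after training task t."""
    last = acc[-1]
    return sum(last) / len(last)

def average_forgetting(acc):
    """Mean drop of each earlier task from its best observed accuracy
    to its final accuracy; higher means more forgetting."""
    num_tasks = len(acc)
    drops = []
    for i in range(num_tasks - 1):
        best = max(acc[t][i] for t in range(i, num_tasks - 1))
        drops.append(best - acc[-1][i])
    return sum(drops) / len(drops)
```

For example, with `acc = [[0.9], [0.8, 0.85], [0.7, 0.8, 0.9]]`, average accuracy is 0.8 and average forgetting is 0.125, since task 0 dropped from 0.9 to 0.7 and task 1 from 0.85 to 0.8.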