AI Summary
This work addresses the challenge agents face in real-world scenarios: adapting rapidly to concept drift while avoiding catastrophic forgetting in continual learning. To this end, we formally introduce and define the Streaming Continual Learning (SCL) paradigm, which integrates the knowledge-retention mechanisms of continual learning with the online updating and concept-drift detection capabilities of streaming machine learning. This framework bridges two previously distinct research communities and fosters the development of hybrid approaches that balance rapid adaptation with long-term memory. Empirical results demonstrate that conventional methods from either continual learning or streaming machine learning alone struggle to achieve this balance, whereas hybrid SCL approaches perform markedly better on non-stationary data streams, effectively reconciling fast adaptation with stable knowledge retention.
Abstract
Continual Learning (CL) and Streaming Machine Learning (SML) study the ability of agents to learn from a stream of non-stationary data. Despite sharing some similarities, they address different and complementary challenges. While SML focuses on rapid adaptation after changes (concept drifts), CL aims to retain past knowledge when learning new tasks. After a brief introduction to CL and SML, we discuss Streaming Continual Learning (SCL), an emerging paradigm providing a unifying solution to real-world problems, which may require both SML and CL abilities. We claim that SCL can i) connect the CL and SML communities, motivating their work towards the same goal, and ii) foster the design of hybrid approaches that can quickly adapt to new information (as in SML) without forgetting previous knowledge (as in CL). We conclude the paper with a motivating example and a set of experiments, highlighting the need for SCL by showing how CL and SML alone struggle to achieve both rapid adaptation and knowledge retention.