🤖 AI Summary
To address the lack of a formal definition and systematic investigation of online continual learning for dynamic graph data streams, this paper introduces the first formal characterization of "online continual graph learning," emphasizing topology-aware batch efficiency and real-time prediction capability. We propose a unified framework that tightly integrates graph neural networks with continual learning mechanisms, enabling incremental task learning under non-i.i.d. graph stream distributions while mitigating catastrophic forgetting. Furthermore, we construct the first standardized benchmark — comprising diverse real-world graph streams and a unified evaluation protocol — to rigorously assess mainstream continual learning methods in graph streaming settings. This work bridges critical gaps in theoretical modeling, algorithmic design, and empirical evaluation, establishing foundational principles for sustainable online graph learning.
📝 Abstract
The aim of Continual Learning (CL) is to learn new tasks incrementally while avoiding catastrophic forgetting. Online Continual Learning (OCL) specifically focuses on learning efficiently from a continuous stream of data with a shifting distribution. While recent studies explore Continual Learning on graphs by exploiting Graph Neural Networks (GNNs), only a few of them focus on a streaming setting. Yet, many real-world graphs evolve over time, often requiring timely and online predictions. Current approaches, however, are not well aligned with the standard OCL setting, partly due to the lack of a clear definition of online Continual Learning on graphs. In this work, we propose a general formulation for online Continual Learning on graphs, emphasizing the efficiency requirements of batch processing over the graph topology, and providing a well-defined setting for systematic model evaluation. Finally, we introduce a set of benchmarks and report the performance of several methods from the CL literature, adapted to our setting.