AL-GNN: Privacy-Preserving and Replay-Free Continual Graph Learning via Analytic Learning

๐Ÿ“… 2025-12-20
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address continual learning on dynamic graph data streams, this paper proposes the first replay-free graph neural network framework that requires neither historical data storage nor backpropagation, simultaneously mitigating catastrophic forgetting and privacy leakage. Methodologically, it leverages analytic learning theory to formulate a recursive least squares (RLS) optimization framework, enabling closed-form updates of classifier parameters and regularization of feature autocorrelation matrices, thus achieving analytic knowledge accumulation and single-pass incremental learning. By eliminating gradient computation and sample replay entirely, the approach improves average accuracy by 10% on CoraFull, reduces forgetting rate by over 30% on Reddit, and cuts training time by nearly 50%. It establishes new state-of-the-art performance across multiple dynamic graph classification benchmarks.

๐Ÿ“ Abstract
Continual graph learning (CGL) aims to enable graph neural networks to incrementally learn from a stream of graph-structured data without forgetting previously acquired knowledge. Existing methods, particularly those based on experience replay, typically store and revisit past graph data to mitigate catastrophic forgetting. However, these approaches pose significant limitations, including privacy concerns and inefficiency. In this work, we propose AL-GNN, a novel framework for continual graph learning that eliminates the need for backpropagation and replay buffers. Instead, AL-GNN leverages principles from analytic learning theory to formulate learning as a recursive least squares optimization process. It maintains and updates model knowledge analytically through closed-form classifier updates and a regularized feature autocorrelation matrix. This design enables efficient one-pass training for each task and inherently preserves data privacy by avoiding historical sample storage. Extensive experiments on multiple dynamic graph classification benchmarks demonstrate that AL-GNN achieves competitive or superior performance compared to existing methods. For instance, it improves average performance by 10% on CoraFull and reduces forgetting by over 30% on Reddit, while also reducing training time by nearly 50% due to its backpropagation-free design.
Problem

Research questions and friction points this paper is trying to address.

Develops a continual graph learning method without backpropagation or replay buffers
Addresses privacy and efficiency issues in existing continual graph learning approaches
Enables one-pass training and avoids storing historical graph data for privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytic learning replaces backpropagation and replay buffers
Recursive least squares optimization enables one-pass training
Closed-form updates and autocorrelation matrix preserve privacy
Xuling Zhang
Hong Kong University of Science and Technology (Guangzhou)
Jindong Li
Hong Kong University of Science and Technology (Guangzhou)
Yifei Zhang
Nanyang Technological University
Menglin Yang
HKUST(GZ) | Yale University | CUHK
Hyperbolic Representation Learning | Transformer | Recommender System | LLM