🤖 AI Summary
To address catastrophic forgetting caused by feature drift in non-exemplar continual graph learning (NECGL), this paper proposes an instance-prototype affinity learning framework. Methodologically, it introduces two core innovations: (1) Topology-Integrated Gaussian Prototypes (TIGP), which incorporate graph structural priors into Gaussian prototype modeling, guiding feature distributions toward high-impact nodes to improve robustness against distribution shift; and (2) Instance-Prototype Affinity Distillation (IPAD), which regularizes discontinuities in class relationships to stabilize knowledge transfer across tasks, complemented by a decision-boundary-aware contrastive objective that sharpens inter-class discriminability. Crucially, the approach stores no raw samples, thereby circumventing the privacy concerns and memory bottlenecks inherent in exemplar-based methods. Extensive evaluation on four node classification benchmarks demonstrates consistent gains over state-of-the-art methods and a better trade-off between plasticity and stability in continual graph learning.
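To make the TIGP idea concrete, the sketch below computes a class prototype as a topology-weighted mean of node features, using node degree as an illustrative stand-in for "high-impact" structural weighting. This is a minimal assumption-laden illustration, not the paper's actual formulation (which builds full Gaussian prototypes from graph structural priors); the function name and the degree-based weighting are hypothetical.

```python
def topology_weighted_prototype(features, degrees):
    """Illustrative prototype: a weighted mean of node feature vectors,
    where each node's weight is its degree normalized over the class.
    Higher-degree ("high-impact") nodes pull the prototype toward them.
    NOTE: degree weighting is an assumption for illustration; TIGP's
    real structural prior may differ.
    """
    total = float(sum(degrees))
    weights = [d / total for d in degrees]  # normalize to sum to 1
    dim = len(features[0])
    proto = [0.0] * dim
    for w, x in zip(weights, features):
        for j in range(dim):
            proto[j] += w * x[j]
    return proto
```

With features `[[1.0, 0.0], [3.0, 0.0]]` and degrees `[1, 3]`, the higher-degree node dominates and the prototype lands closer to it than a plain mean would.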
📝 Abstract
Graph Neural Networks (GNNs) suffer from catastrophic forgetting, losing previously acquired knowledge as they assimilate new information. Rehearsal-based techniques, which revisit historical examples, have been adopted as a principal strategy to alleviate this phenomenon; however, memory explosion and privacy infringement significantly constrain their utility. Non-exemplar methods circumvent these issues through Prototype Replay (PR), yet feature drift introduces new challenges. In this paper, our empirical findings reveal that Prototype Contrastive Learning (PCL) exhibits less pronounced drift than conventional PR. Building on PCL, we propose Instance-Prototype Affinity Learning (IPAL), a novel paradigm for Non-Exemplar Continual Graph Learning (NECGL). Exploiting graph structural information, we formulate Topology-Integrated Gaussian Prototypes (TIGP), which guide feature distributions toward high-impact nodes and strengthen the model's capacity to assimilate new knowledge. Instance-Prototype Affinity Distillation (IPAD) safeguards task memory by regularizing discontinuities in class relationships. Moreover, we embed a Decision Boundary Perception (DBP) mechanism within PCL to foster greater inter-class discriminability. Evaluations on four node classification benchmark datasets demonstrate that our method outperforms existing state-of-the-art methods, achieving a better trade-off between plasticity and stability.
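The affinity-distillation idea behind IPAD can be sketched as follows: an instance's "affinity" to each class prototype is a softmax over (negative) distances, and a KL-style penalty discourages the new model from changing those affinities relative to the old model. This is a minimal sketch under stated assumptions (squared-Euclidean distances, temperature 1, KL(old || new)); the paper's exact similarity measure and loss may differ, and both function names are hypothetical.

```python
import math

def affinities(feature, prototypes, tau=1.0):
    """Softmax over negative squared distances from one instance
    to each class prototype; encodes the instance's class relationships."""
    logits = [-sum((f - p) ** 2 for f, p in zip(feature, proto)) / tau
              for proto in prototypes]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def affinity_distillation_loss(old_aff, new_aff, eps=1e-12):
    """KL(old || new): zero when the instance-prototype affinities are
    preserved across tasks, growing as class relationships drift."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(old_aff, new_aff))
```

An instance near prototype 0 keeps most of its affinity mass there; if, after training on a new task, that mass shifts toward another prototype, the loss rises, penalizing the discontinuity in class relationships.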