GraphKeeper: Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper presents the first systematic study of catastrophic forgetting in graph domain-incremental learning (Domain-IL). Existing methods are confined to single-domain task- and class-incremental settings and are ill-suited to continual updates across heterogeneous graph domains; to address this, the authors propose a knowledge disentanglement and unbiased preservation framework. The approach features: (i) a domain-aware, parameter-efficient fine-tuning mechanism that jointly optimizes intra-domain discriminability and inter-domain invariance; (ii) disentangled embedding-space constraints and bias-free knowledge distillation to mitigate embedding shift and decision-boundary drift; and (iii) support for unknown-domain detection. Extensive experiments on multiple graph benchmarks demonstrate substantial improvements over state-of-the-art methods, with average accuracy gains of 6.5%–16.6%, negligible forgetting, and plug-and-play compatibility with mainstream graph foundation models.
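The summary mentions support for unknown-domain detection but does not spell out the mechanism. As a rough illustration of how such detection can work in general, the sketch below models each seen domain's embedding distribution as a diagonal Gaussian and routes an incoming embedding to the highest-scoring domain, flagging it as unknown when every score falls below a threshold. The `DomainDetector` class, its threshold, and the Gaussian assumption are all illustrative, not the paper's actual "domain-aware distribution discrimination" procedure.

```python
import numpy as np

class DomainDetector:
    """Toy sketch: fit a diagonal Gaussian per observed domain, then
    score new embeddings by log-density. Low scores under every
    domain mean the input likely comes from an unseen domain."""

    def __init__(self, threshold=-50.0):
        self.stats = {}           # domain name -> (mean, variance) per dimension
        self.threshold = threshold

    def fit_domain(self, name, embeddings):
        # Store per-dimension mean and variance of this domain's embeddings.
        self.stats[name] = (embeddings.mean(axis=0),
                            embeddings.var(axis=0) + 1e-6)

    def log_density(self, x, mean, var):
        # Diagonal-Gaussian log-likelihood of a single embedding.
        return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                                   + (x - mean) ** 2 / var))

    def route(self, x):
        # Pick the best-scoring domain, or "unknown" if nothing fits.
        scores = {d: self.log_density(x, m, v)
                  for d, (m, v) in self.stats.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= self.threshold else "unknown"
```

In a Domain-IL setting, such a router would let a model pick domain-specific parameters at inference time even when the incoming graph's domain label is unobserved.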

📝 Abstract
Graph incremental learning (GIL), which continuously updates graph models by sequential knowledge acquisition, has garnered significant interest recently. However, existing GIL approaches focus on task-incremental and class-incremental scenarios within a single domain. Graph domain-incremental learning (Domain-IL), aiming at updating models across multiple graph domains, has become critical with the development of graph foundation models (GFMs), but remains unexplored in the literature. In this paper, we propose Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation (GraphKeeper), to address catastrophic forgetting in the Domain-IL scenario from the perspectives of embedding shifts and decision boundary deviations. Specifically, to prevent embedding shifts and confusion across incremental graph domains, we first propose domain-specific parameter-efficient fine-tuning together with intra- and inter-domain disentanglement objectives. Subsequently, to maintain a stable decision boundary, we introduce deviation-free knowledge preservation to continuously fit incremental domains. Additionally, for graphs with unobservable domains, we perform domain-aware distribution discrimination to obtain precise embeddings. Extensive experiments demonstrate that the proposed GraphKeeper achieves state-of-the-art results with 6.5%–16.6% improvement over the runner-up with negligible forgetting. Moreover, we show GraphKeeper can be seamlessly integrated with various representative GFMs, highlighting its broad applicability.
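The "deviation-free knowledge preservation" step is described only at a high level here. A common way to keep a decision boundary stable across sequential updates is temperature-scaled knowledge distillation against a frozen copy of the previous model, penalizing the new model when its class distribution drifts from the old one. The numpy sketch below shows that generic idea; the function names and temperature are illustrative, not GraphKeeper's actual loss.

```python
import numpy as np

def softmax(logits, T=2.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the old model's "dark knowledge".
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(old_logits, new_logits, T=2.0):
    """Mean KL(old || new) over a batch: keeping the new model's
    outputs close to the frozen old model's stabilizes the decision
    boundary learned on earlier domains."""
    p = softmax(old_logits, T)
    q = softmax(new_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float(kl.mean())
```

The "deviation-free" / "bias-free" qualifier in the paper presumably refers to correcting the bias such distillation can otherwise introduce toward newly seen domains; the exact correction is not detailed in this summary, so only the plain distillation term is sketched.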
Problem

Research questions and friction points this paper is trying to address.

Addressing catastrophic forgetting in graph domain-incremental learning scenarios
Preventing embedding shifts and decision boundary deviations across domains
Enabling continuous model updates across multiple graph domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-specific fine-tuning with disentanglement objectives
Deviation-free knowledge preservation for stable boundaries
Domain-aware distribution discrimination for unobservable graphs
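The bullets above can be made concrete with a toy version of intra- and inter-domain objectives: within a domain, pull same-class embeddings together and push different classes at least a margin apart; across domains, keep embedding centroids from collapsing into one another. This is a hedged illustration in plain numpy of the general contrastive/margin idea, not the paper's actual disentanglement objectives, and all names and margins are assumptions.

```python
import numpy as np

def pairwise_sq_dists(X):
    # Squared Euclidean distances between all rows of X.
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def intra_domain_loss(emb, labels, margin=1.0):
    """Within one domain: same-class pairs are pulled together,
    different-class pairs are pushed at least `margin` apart."""
    d = pairwise_sq_dists(emb)
    eq = labels[:, None] == labels[None, :]
    same = eq.copy()
    np.fill_diagonal(same, False)          # exclude self-pairs
    diff = ~eq                             # diagonal stays excluded
    pull = d[same].mean() if same.any() else 0.0
    push = np.maximum(0.0, margin - d[diff]).mean() if diff.any() else 0.0
    return pull + push

def inter_domain_loss(emb_a, emb_b, margin=4.0):
    """Across two domains: hinge on the squared distance between domain
    centroids, so embeddings of distinct domains do not get confused."""
    gap = np.sum((emb_a.mean(axis=0) - emb_b.mean(axis=0)) ** 2)
    return max(0.0, margin - gap)
```

A combined objective would weight the two terms, e.g. `intra + lam * inter`, alongside the task loss; the weighting is a tunable hyperparameter in any such scheme.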
🔎 Similar Papers
2024-07-27 · IEEE Transactions on Pattern Analysis and Machine Intelligence · Citations: 2