TAAM: Inductive Graph-Class Incremental Learning with Task-Aware Adaptive Modulation

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in graph continual learning—namely, the memory overhead and privacy risks of replay strategies, the stability-plasticity trade-off, unknown task identities, and data leakage in existing benchmarks—by proposing a Task-Aware Adaptive Modulation (TAAM) framework. TAAM trains and freezes lightweight Neural Synapse Modulators (NSMs) atop a fixed GNN backbone, one per new task, enabling node-level modulation of the computational flow and mitigating catastrophic forgetting without any replay. Additionally, the authors introduce and theoretically justify an Anchored Multi-hop Propagation (AMP) mechanism to handle unknown task IDs, and establish a more rigorous inductive evaluation benchmark. Extensive experiments across eight datasets demonstrate that TAAM significantly outperforms state-of-the-art methods in preserving prior knowledge, adapting to new tasks, and operating under unknown task identities.

📝 Abstract
Graph Continual Learning (GCL) aims to solve the challenges of streaming graph data. However, current methods often depend on replay-based strategies, which raise concerns such as memory limits and privacy issues, while also struggling to resolve the stability-plasticity dilemma. In this paper, we suggest that lightweight, task-specific modules can effectively guide the reasoning process of a fixed GNN backbone. Based on this idea, we propose Task-Aware Adaptive Modulation (TAAM). The key component of TAAM is its lightweight Neural Synapse Modulators (NSMs). For each new task, a dedicated NSM is trained and then frozen, acting as an "expert module." These modules perform fine-grained, node-attentive adaptive modulation of the computational flow of a shared GNN backbone. This design keeps new knowledge within compact, task-specific modules, naturally preventing catastrophic forgetting without any data replay. Additionally, to address the important challenge of unknown task IDs in real-world scenarios, we propose and theoretically justify a novel method named Anchored Multi-hop Propagation (AMP). Notably, we find that existing GCL benchmarks have flaws that can cause data leakage and biased evaluations; we therefore conduct all experiments in a more rigorous inductive learning scenario. Extensive experiments show that TAAM comprehensively outperforms state-of-the-art methods across eight datasets. Code and datasets are available at: https://github.com/1iuJT/TAAM_AAMAS2026.
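The core idea above—freeze a shared backbone, then attach one small frozen modulator per task—can be sketched in a few lines. This is a minimal, illustrative sketch only: the paper does not specify the NSM internals here, so the FiLM-style scale-and-shift gate, the class names, and the `predict` routing helper below are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of per-task modulation atop a frozen backbone,
# in the spirit of TAAM's Neural Synapse Modulators (NSMs).
# The scale-and-shift form is an assumption for illustration.

class FrozenBackbone:
    """Shared feature extractor; its weights never change after pretraining."""
    def __init__(self, weight):
        self.weight = weight  # fixed per-feature weights

    def forward(self, x):
        return [w * xi for w, xi in zip(self.weight, x)]

class TaskModulator:
    """Lightweight per-task module: element-wise scale and shift.
    Trained for one task, then frozen and stored as an 'expert module'."""
    def __init__(self, dim):
        self.scale = [1.0] * dim
        self.shift = [0.0] * dim

    def forward(self, h):
        return [s * hi + b for s, hi, b in zip(self.scale, h, self.shift)]

def predict(backbone, modulators, task_id, x):
    """Route the input through the shared backbone, then the task's frozen
    modulator. New tasks only ever add a new modulator, so earlier tasks'
    computation paths are untouched (no forgetting by construction)."""
    h = backbone.forward(x)
    return modulators[task_id].forward(h)

# Usage: two tasks share one backbone; each owns its own modulator.
backbone = FrozenBackbone([0.5, 2.0])
mods = {0: TaskModulator(2), 1: TaskModulator(2)}
mods[1].scale = [2.0, 0.5]   # pretend task 1's modulator was already trained
print(predict(backbone, mods, 0, [4.0, 1.0]))  # → [2.0, 2.0]
print(predict(backbone, mods, 1, [4.0, 1.0]))  # → [4.0, 1.0]
```

Because parameters for old tasks are never revisited, stability is exact; plasticity comes entirely from each new modulator. The open question this leaves—which modulator to use when the task ID is unknown—is what the paper's AMP mechanism addresses.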
Problem

Research questions and friction points this paper is trying to address.

Graph Continual Learning
catastrophic forgetting
task-agnostic learning
data replay
inductive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-Aware Adaptive Modulation
Neural Synapse Modulators
Graph Continual Learning
Replay-Free Learning
Inductive Learning