The Unreasonable Effectiveness of Randomized Representations in Online Continual Graph Learning

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Online continual graph learning (OCGL) confronts the dual challenges of distribution shift and catastrophic forgetting without access to replay mechanisms or offline retraining. To address this, the paper proposes a replay-free, regularization-free, lightweight continual learning paradigm: a fixed, randomly initialized graph encoder, whose parameters remain frozen throughout training, leverages neighborhood aggregation to produce stable, expressive node embeddings, while only a lightweight classifier is updated online. This design eliminates representation drift at the source, substantially mitigating forgetting. Evaluated on multiple benchmark datasets, the method surpasses existing state-of-the-art approaches by up to 30%, approaching the upper bound of joint offline training while requiring no memory buffer. The approach thus balances simplicity, stability, and computational efficiency.
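The pipeline described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the graph, layer widths, tanh activation, and plain SGD logistic-regression update are all assumptions; the essential ingredients from the summary are the frozen random aggregation encoder and the classifier updated one node at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes in two triangles, 4 features, 2 classes (illustrative only)
X = rng.normal(size=(6, 4))                      # node features
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)  # adjacency matrix
y = np.array([0, 0, 0, 1, 1, 1])                 # node labels

# Symmetrically normalized adjacency with self-loops, as in a standard GCN layer
A_hat = A + np.eye(6)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

# Frozen, randomly initialized two-layer encoder: these weights are NEVER updated,
# so the node embeddings cannot drift as the stream progresses.
W1 = rng.normal(scale=1 / np.sqrt(4), size=(4, 16))
W2 = rng.normal(scale=1 / np.sqrt(16), size=(16, 16))

def encode(X):
    H = np.tanh(A_norm @ X @ W1)    # neighborhood aggregation, layer 1
    return np.tanh(A_norm @ H @ W2)  # neighborhood aggregation, layer 2

Z = encode(X)  # stable embeddings, identical at every point in the stream

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Lightweight classifier (logistic regression) trained online, one node at a time
Wc = np.zeros((16, 2))
lr = 0.5
for _ in range(50):
    for i in range(6):               # nodes arrive one by one
        p = softmax(Z[i] @ Wc)
        Wc -= lr * np.outer(Z[i], p - np.eye(2)[y[i]])  # cross-entropy SGD step

pred = (Z @ Wc).argmax(axis=1)
```

Because the encoder is frozen, re-encoding old nodes always reproduces their original embeddings, which is exactly the property the paper credits with suppressing forgetting; only the small `Wc` matrix adapts to the stream.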

📝 Abstract
Catastrophic forgetting is one of the main obstacles for Online Continual Graph Learning (OCGL), where nodes arrive one by one, distribution drifts may occur at any time, and offline training on task-specific subgraphs is not feasible. In this work, we explore a surprisingly simple yet highly effective approach for OCGL: we use a fixed, randomly initialized encoder to generate robust and expressive node embeddings by aggregating neighborhood information, training online only a lightweight classifier. By freezing the encoder, we eliminate drift of the representation parameters, a key source of forgetting, obtaining embeddings that are both expressive and stable. When evaluated across several OCGL benchmarks, despite its simplicity and lack of a memory buffer, this approach yields consistent gains over state-of-the-art methods, with surprising improvements of up to 30% and performance often approaching that of the joint offline-training upper bound. These results suggest that in OCGL, catastrophic forgetting can be minimized without complex replay or regularization by embracing architectural simplicity and stability.
Problem

Research questions and friction points this paper is trying to address.

How to address catastrophic forgetting in online continual graph learning
Whether a frozen, randomly initialized encoder can produce stable node embeddings
Whether a lightweight classifier trained online can suffice without memory buffers
Innovation

Methods, ideas, or system contributions that make the work stand out.

A fixed random encoder generates stable node embeddings
Only a lightweight classifier is trained online, keeping compute low
No memory buffer is needed to prevent catastrophic forgetting