FLASH: Flexible Learning of Adaptive Sampling from History in Temporal Graph Neural Networks

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In dynamic graph future link prediction, existing Temporal Graph Neural Networks (TGNNs) rely on static neighbor sampling strategies (e.g., uniform or recent-neighbor sampling), which fail to adapt to evolving graph topology and lead to inefficient historical information aggregation. To address this, we propose a graph-adaptive, learnable neighbor sampling mechanism: (i) we formulate neighbor selection as an end-to-end differentiable ranking task—first in the literature—and theoretically prove that conventional heuristic sampling degrades performance, while our framework subsumes and generalizes uniform/recent strategies; (ii) we design a self-supervised ranking loss, a lightweight TGNN adapter, and a graph-structure-aware importance modeling module. Extensive experiments demonstrate average AUC improvements of 3.2–5.8 percentage points across multiple benchmarks, with negligible inference overhead (<2% increase).

Technology Category

Application Category

📝 Abstract
Aggregating temporal signals from historic interactions is a key step in future link prediction on dynamic graphs. However, incorporating long histories is resource-intensive. Hence, temporal graph neural networks (TGNNs) often rely on historical neighbors sampling heuristics such as uniform sampling or recent neighbors selection. These heuristics are static and fail to adapt to the underlying graph structure. We introduce FLASH, a learnable and graph-adaptive neighborhood selection mechanism that generalizes existing heuristics. FLASH integrates seamlessly into TGNNs and is trained end-to-end using a self-supervised ranking loss. We provide theoretical evidence that commonly used heuristics hinders TGNNs performance, motivating our design. Extensive experiments across multiple benchmarks demonstrate consistent and significant performance improvements for TGNNs equipped with FLASH.
Problem

Research questions and friction points this paper is trying to address.

Adaptive sampling from history in dynamic graphs
Overcoming static heuristics in temporal GNNs
Improving link prediction with learnable selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable adaptive neighborhood selection mechanism
Self-supervised ranking loss for training
Seamless integration into temporal GNNs
🔎 Similar Papers
No similar papers found.