EvoFormer: Learning Dynamic Graph-Level Representations with Structural and Temporal Bias Correction

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic graph embedding faces two key challenges: Structural Visit Bias, where random walks over-sample high-degree nodes and inject redundant noise into representations, and Abrupt Evolution Blindness, where existing temporal modeling fails to capture sudden structural changes. To address these, we propose EvoFormer, a structure-aware and evolution-sensitive dynamic graph-level embedding framework. First, we design a structural-role-aware positional encoding to mitigate the structural bias. Second, we introduce a three-stage temporal modeling strategy integrating timestamp classification of random walks, graph-level temporal segmentation, and edge evolution prediction, enhanced by a Segment-Aware Temporal Self-Attention mechanism that improves sensitivity to abrupt evolutionary shifts. Evaluated on five benchmark datasets, our method achieves state-of-the-art performance on graph similarity ranking, temporal anomaly detection, and temporal segmentation, marking the first systematic correction of both structural and temporal biases in dynamic graph learning.
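The summary's structural-role-aware positional encoding is not specified in detail here, but the idea can be illustrated with a minimal sketch: map each node's degree (a simple proxy for structural role; the paper may use a richer role descriptor) to a log-spaced bucket, and add that bucket's learned vector to the node's token embedding so attention can globally distinguish hubs from peripheral nodes. Bucket count, dimensionality, and the random table below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def role_positional_encoding(degrees, num_buckets=8, dim=16):
    # Hypothetical structural-role encoding: log-spaced degree buckets,
    # each mapped to a vector. In a real model the table would be a
    # learned embedding; here it is random for illustration only.
    table = rng.normal(scale=0.02, size=(num_buckets, dim))
    buckets = np.minimum(np.log2(np.asarray(degrees) + 1).astype(int),
                         num_buckets - 1)
    return table[buckets]  # one encoding row per node: (num_nodes, dim)

# Four nodes with degrees 1, 3, 200, 7 fall into distinct buckets.
pe = role_positional_encoding([1, 3, 200, 7])
print(pe.shape)  # (4, 16)
```

Logarithmic bucketing keeps the encoding stable for the long-tailed degree distributions that cause the over-sampling problem in the first place.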

📝 Abstract
Dynamic graph-level embedding aims to capture structural evolution in networks, which is essential for modeling real-world scenarios. However, existing methods face two critical yet under-explored issues: Structural Visit Bias, where random walk sampling disproportionately emphasizes high-degree nodes, leading to redundant and noisy structural representations; and Abrupt Evolution Blindness, the failure to effectively detect sudden structural changes due to rigid or overly simplistic temporal modeling strategies, resulting in inconsistent temporal embeddings. To overcome these challenges, we propose EvoFormer, an evolution-aware Transformer framework tailored for dynamic graph-level representation learning. To mitigate Structural Visit Bias, EvoFormer introduces a Structure-Aware Transformer Module that incorporates positional encoding based on node structural roles, allowing the model to globally differentiate and accurately represent node structures. To overcome Abrupt Evolution Blindness, EvoFormer employs an Evolution-Sensitive Temporal Module, which explicitly models temporal evolution through a sequential three-step strategy: (I) Random Walk Timestamp Classification, generating initial timestamp-aware graph-level embeddings; (II) Graph-Level Temporal Segmentation, partitioning the graph stream into segments reflecting structurally coherent periods; and (III) Segment-Aware Temporal Self-Attention combined with an Edge Evolution Prediction task, enabling the model to precisely capture segment boundaries and perceive structural evolution trends, effectively adapting to rapid temporal shifts. Extensive evaluations on five benchmark datasets confirm that EvoFormer achieves state-of-the-art performance in graph similarity ranking, temporal anomaly detection, and temporal segmentation tasks, validating its effectiveness in correcting structural and temporal biases.
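The abstract's Segment-Aware Temporal Self-Attention can be sketched under one assumption: segment awareness is realized as an additive same-segment bias on the attention scores, so snapshots attend more strongly within the structurally coherent periods found by the segmentation step. The paper's exact parameterization is not given here; this is an illustrative reading, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def segment_aware_attention(x, seg_ids, same_seg_bias=1.0):
    # x: (T, d) per-snapshot graph embeddings; seg_ids: (T,) segment labels
    # produced by the temporal segmentation step. The additive bias is an
    # assumed mechanism for "segment awareness".
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                  # scaled dot-product scores
    same = seg_ids[:, None] == seg_ids[None, :]    # same-segment indicator
    scores = scores + same_seg_bias * same         # favor within-segment pairs
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # row-wise softmax
    return w @ x                                   # (T, d) attended embeddings

x = rng.normal(size=(6, 8))                        # 6 snapshots, dim 8
out = segment_aware_attention(x, np.array([0, 0, 0, 1, 1, 1]))
print(out.shape)  # (6, 8)
```

Because the bias sharpens attention inside a segment and dampens it across segment boundaries, embeddings on either side of a boundary stay distinct, which is how the mechanism would help the model perceive abrupt shifts rather than smoothing over them.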
Problem

Research questions and friction points this paper is trying to address.

Addresses Structural Visit Bias in dynamic graph embeddings
Overcomes Abrupt Evolution Blindness in temporal modeling
Corrects biased node representations and improves detection of sudden structural changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-Aware Transformer with node role encoding
Evolution-Sensitive Temporal Module with three-step strategy
Segment-Aware Temporal Self-Attention with edge prediction
Haodi Zhong
Xidian University
Data mining · Biomedical informatics
Liuxin Zou
Xidian University, Xi'an, China
Di Wang
Xidian University, Xi'an, China
Bo Wang
Xidian University, Xi'an, China
Zhenxing Niu
Xidian University, Xi'an, China
Quan Wang
Xidian University, Xi'an, China