OpenGT: A Comprehensive Benchmark for Graph Transformers

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Transformers (GTs) suffer from a lack of unified benchmarks, leading to ambiguity about their applicable scenarios, unclear effectiveness of design choices, and inconsistent evaluation protocols. To address this, we introduce OpenGT, the first comprehensive multi-task, multi-dataset benchmark designed specifically for GTs, systematically evaluating key architectural components: positional encodings (Laplacian, random-walk, SignNet), attention mechanisms (global, local, sparse), and graph-structure adaptation strategies. We propose a standardized evaluation framework that uncovers critical empirical insights, including the difficulty of cross-task transfer, the limitations of local attention, and the task-dependent efficacy of positional encodings. Furthermore, we release an open-source, extensible training and evaluation library aimed at improving fairness, reproducibility, and generalizability in GT research. The code is publicly available on GitHub.
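The summary names Laplacian (LAP) and random-walk (RW) positional encodings among the components OpenGT evaluates. As a rough illustration of these two standard techniques (a generic NumPy sketch, not OpenGT's implementation; the function names are mine), Laplacian PE assigns each node the entries of the low-frequency eigenvectors of the symmetric normalized Laplacian, while RW PE records the node's return probabilities after 1..k random-walk steps:

```python
import numpy as np

def laplacian_pe(adj, k):
    """k-dim Laplacian PE: eigenvectors of the symmetric normalized
    Laplacian with the smallest nonzero eigenvalues."""
    deg = adj.sum(axis=1).astype(float)
    d = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d[:, None] * adj * d[None, :]
    _, vecs = np.linalg.eigh(lap)    # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # drop the trivial constant eigenvector

def random_walk_pe(adj, k):
    """k-dim RW PE: return probabilities diag(P), diag(P^2), ..., diag(P^k)
    of the row-normalized transition matrix P."""
    p = adj / np.clip(adj.sum(axis=1, keepdims=True), 1, None)
    out, m = [], np.eye(len(adj))
    for _ in range(k):
        m = m @ p
        out.append(np.diag(m))
    return np.stack(out, axis=1)
```

Both encodings are computed once per graph as a preprocessing step; the eigendecomposition in Laplacian PE is a typical source of the preprocessing overhead the summary alludes to.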

📝 Abstract
Graph Transformers (GTs) have recently demonstrated remarkable performance across diverse domains. By leveraging attention mechanisms, GTs are capable of modeling long-range dependencies and complex structural relationships beyond local neighborhoods. However, their applicable scenarios remain underexplored, which highlights the need to identify when and why they excel. Furthermore, unlike GNNs, which predominantly rely on message-passing mechanisms, GTs exhibit a diverse design space in areas such as positional encoding, attention mechanisms, and graph-specific adaptations. Yet it remains unclear which of these design choices are truly effective and under what conditions. As a result, the community currently lacks a comprehensive benchmark and library to promote a deeper understanding and further development of GTs. To address this gap, this paper introduces OpenGT, a comprehensive benchmark for Graph Transformers. OpenGT enables fair comparisons and multidimensional analysis by establishing standardized experimental settings and incorporating a broad selection of state-of-the-art GNNs and GTs. Our benchmark evaluates GTs from multiple perspectives, encompassing diverse tasks and datasets with varying properties. Through extensive experiments, our benchmark has uncovered several critical insights, including the difficulty of transferring models across task levels, the limitations of local attention, the efficiency trade-offs in several models, the application scenarios of specific positional encodings, and the preprocessing overhead of some positional encodings. We aspire for this work to establish a foundation for future Graph Transformer research that emphasizes fairness, reproducibility, and generalizability. We have developed an easy-to-use library, OpenGT, for training and evaluating existing GTs. The benchmark code is available at https://github.com/eaglelab-zju/OpenGT.
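The abstract contrasts global attention, where every node attends to every other node, with local attention restricted to graph neighborhoods. The distinction reduces to a mask over the attention scores, as in this minimal single-head sketch (a generic illustration under my own simplification that queries, keys, and values all equal the node features; not OpenGT's code):

```python
import numpy as np

def attention(x, mask=None):
    """Single-head self-attention over node features x (n x d).
    mask[i, j] = True means node i may attend to node j;
    mask=None gives global attention over all node pairs."""
    d = x.shape[-1]
    scores = (x @ x.T) / np.sqrt(d)  # queries = keys = values = x, for brevity
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)  # forbid masked pairs
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)  # softmax rows
    return w @ x
```

Local attention is then `attention(x, mask=(adj + np.eye(n)) > 0)`, with self-loops added so every row has at least one unmasked entry; a sparse mechanism would instead restrict the pairs for which scores are materialized at all.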
Problem

Research questions and friction points this paper is trying to address.

Assess when and why Graph Transformers excel in diverse scenarios
Identify effective design choices in Graph Transformers' components
Provide a standardized benchmark for fair GT comparisons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized benchmark for Graph Transformers
Multidimensional analysis with diverse tasks
Easy-to-use library for GT evaluation
👥 Authors
Jiachen Tang
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University
Zhonghao Wang
Zhejiang University (Graph Machine Learning)
Sirui Chen
University of Illinois Urbana-Champaign (Reinforcement Learning, Information Retrieval)
Sheng Zhou
School of Software Technology, Zhejiang University
Jiawei Chen
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University
Jiajun Bu
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University