🤖 AI Summary
Graph Transformers (GTs) suffer from limited performance due to the absence of efficient tokenizers for graph-structured data.
Method: This paper introduces the Graph Quantized Tokenizer (GQT), the first approach to adapt residual vector quantization (RVQ) to graphs, enabling hierarchical and robust discrete graph token generation. GQT decouples tokenizer training from the Transformer backbone and integrates multi-task graph self-supervised learning with graph token modulation to enhance generalization and efficiency. It supports both homogeneous and heterogeneous large-scale graphs and is plug-and-play, requiring no architectural modifications.
Contribution/Results: GQT establishes new state-of-the-art (SOTA) results on 20 out of 22 benchmark datasets, significantly reduces memory overhead, and substantially improves cross-domain generalization capability.
📝 Abstract
Transformers serve as the backbone architecture of foundation models, where domain-specific tokenizers allow them to adapt to various domains. Graph Transformers (GTs) have recently emerged as leading models in geometric deep learning, outperforming Graph Neural Networks (GNNs) in various graph learning tasks. However, the development of tokenizers for graphs has lagged behind other modalities. To address this, we introduce GQT (**G**raph **Q**uantized **T**okenizer), which decouples tokenizer training from Transformer training by leveraging multi-task graph self-supervised learning, yielding robust and generalizable graph tokens. Furthermore, GQT utilizes Residual Vector Quantization (RVQ) to learn hierarchical discrete tokens, resulting in significantly reduced memory requirements and improved generalization capabilities. By combining GQT with token modulation, a Transformer encoder achieves state-of-the-art performance on 20 out of 22 benchmarks, including large-scale homophilic and heterophilic datasets.
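The core idea behind the RVQ step can be illustrated with a minimal sketch: each stage quantizes the residual left over by the previous stage's codebook lookup, so a node embedding is summarized by a short, coarse-to-fine stack of discrete token IDs. This is a generic RVQ illustration, not the paper's implementation; the codebook sizes, dimensions, and variable names below are made up for the example, and real codebooks would be learned rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 codebooks of 8 codes each over 4-dim node embeddings.
num_stages, codebook_size, dim = 3, 8, 4
codebooks = rng.normal(size=(num_stages, codebook_size, dim))

def rvq_encode(x, codebooks):
    """Residual vector quantization: stage k quantizes the residual
    x - reconstruction_so_far, producing one discrete token per stage."""
    tokens = []
    recon = np.zeros_like(x)
    residual = x.copy()
    for codebook in codebooks:
        # Pick the code nearest to the current residual (L2 distance).
        idx = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        tokens.append(idx)
        recon = recon + codebook[idx]
        residual = x - recon
    return tokens, recon

x = rng.normal(size=dim)  # stand-in for one node embedding
tokens, recon = rvq_encode(x, codebooks)
```

Because each node is stored as a few small integer IDs instead of a dense float vector, the token sequence fed to the Transformer is far more memory-efficient, which is the property the abstract highlights.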