Generalizable Insights for Graph Transformers in Theory and Practice

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph Transformer (GT) architectures lack a unified theoretical framework governing attention mechanisms, positional encodings, and expressive power; theoretical analyses are often tied to specific architectural choices and lack large-scale empirical validation, limiting their generalizability. To address this, we propose the Generalized-Distance Transformer (GDT), a GT framework that unifies attention computation and positional encoding under a principled graph-distance formulation, yielding an interpretable, scalable, and theoretically grounded architecture. We rigorously analyze GDT's fine-grained expressivity and evaluate it at scale across diverse domains (over 8M graphs, roughly 270M tokens), demonstrating few-shot transfer to tasks such as image-based object detection, molecular property prediction, and code summarization without task-specific fine-tuning. Our core contributions are: (i) a unified GT paradigm combining theoretical rigor with empirical validation; (ii) systematic distillation of domain- and scale-agnostic design principles; and (iii) comprehensive evidence of strong generalization and consistent performance across tasks and model scales.

📝 Abstract
Graph Transformers (GTs) have shown strong empirical performance, yet current architectures vary widely in their use of attention mechanisms, positional embeddings (PEs), and expressivity. Existing expressivity results are often tied to specific design choices and lack comprehensive empirical validation on large-scale data. This leaves a gap between theory and practice, preventing generalizable insights that extend beyond particular application domains. Here, we propose the Generalized-Distance Transformer (GDT), a GT architecture using standard attention that incorporates many advancements for GTs from recent years, and develop a fine-grained understanding of the GDT's representation power in terms of attention and PEs. Through extensive experiments, we identify design choices that consistently perform well across various applications, tasks, and model scales, demonstrating strong performance in a few-shot transfer setting without fine-tuning. Our evaluation covers over eight million graphs with roughly 270M tokens across diverse domains, including image-based object detection, molecular property prediction, code summarization, and out-of-distribution algorithmic reasoning. We distill our theoretical and practical findings into several generalizable insights about effective GT design, training, and inference.
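The abstract does not specify GDT's exact attention computation. As a hedged illustration of the general idea of unifying attention and positional encoding through a graph distance, the sketch below biases standard dot-product attention logits with a per-distance term (a Graphormer-style shortest-path bias). All names here (`distance_biased_attention`, `bias_per_dist`) are hypothetical and not taken from the paper.

```python
import numpy as np

def shortest_path_distances(adj):
    """All-pairs hop distances via repeated BFS.
    adj: (n, n) 0/1 adjacency matrix of an undirected graph."""
    n = adj.shape[0]
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.nonzero(adj[u])[0]:
                    if dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def distance_biased_attention(X, adj, bias_per_dist):
    """Single-head attention whose logits are shifted by a bias
    indexed by graph distance (illustrative, not the paper's exact scheme)."""
    d_k = X.shape[1]
    scores = X @ X.T / np.sqrt(d_k)               # plain dot-product logits
    dist = shortest_path_distances(adj)
    dist = np.minimum(dist, len(bias_per_dist) - 1).astype(int)  # clip far pairs
    scores = scores + bias_per_dist[dist]         # distance-dependent shift
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

# Toy 4-node path graph 0-1-2-3 with one-hot node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
bias = np.array([0.0, -0.5, -1.0, -1.5])          # nearer nodes attend more
out = distance_biased_attention(X, adj, bias)
```

With the monotonically decreasing bias, each node's attention mass concentrates on graph-nearby nodes, which is one simple way a distance metric can double as the positional signal.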
Problem

Research questions and friction points this paper is trying to address.

Analyzing Graph Transformers' theoretical expressivity and empirical performance gaps
Identifying universally effective design choices across diverse graph applications
Bridging theoretical understanding with practical performance through large-scale evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized-Distance Transformer architecture with standard attention
Fine-grained analysis of representation power in terms of attention and positional embeddings
Evaluation of design choices across diverse domains, tasks, and model scales
Timo Stoll
Department of Computer Science, RWTH Aachen University, Aachen, Germany
Luis Muller
Department of Computer Science, RWTH Aachen University, Aachen, Germany
Christopher Morris
RWTH Aachen University
Machine learning on graphs, graph neural networks, machine learning for discrete algorithms