🤖 AI Summary
Existing graph Transformer (GT) architectures lack a unified theoretical framework governing attention mechanisms, positional encoding, and expressive power; theoretical analyses are often tied to specific architectural choices and lack large-scale empirical validation, limiting generalizability. To address this, we propose the Generalized Distance Transformer (GDT), the first GT framework that unifies attention computation and positional encoding under a principled graph distance metric, yielding an interpretable, scalable, and theoretically grounded architecture. GDT is rigorously analyzed for fine-grained expressivity and pretrained at scale across diverse domains (8M graphs, 270M tokens), enabling cross-modal few-shot transfer, e.g., image-based object detection, molecular property prediction, and code summarization, without task-specific fine-tuning, while consistently outperforming state-of-the-art methods. Our core contributions are: (i) the first unified GT paradigm combining theoretical rigor with empirical robustness; (ii) systematic distillation of domain- and scale-agnostic design principles; and (iii) comprehensive validation of strong generalization and performance consistency across multiple tasks.
📝 Abstract
Graph Transformers (GTs) have shown strong empirical performance, yet current architectures vary widely in their attention mechanisms, positional embeddings (PEs), and claimed expressivity. Existing expressivity results are often tied to specific design choices and lack comprehensive empirical validation on large-scale data. This leaves a gap between theory and practice, preventing generalizable insights that extend beyond particular application domains. Here, we propose the Generalized-Distance Transformer (GDT), a GT architecture based on standard attention that incorporates many recent advances in GT design, and develop a fine-grained understanding of the GDT's representation power in terms of attention and PEs. Through extensive experiments, we identify design choices that consistently perform well across various applications, tasks, and model scales, demonstrating strong performance in a few-shot transfer setting without fine-tuning. Our evaluation covers over eight million graphs with roughly 270M tokens across diverse domains, including image-based object detection, molecular property prediction, code summarization, and out-of-distribution algorithmic reasoning. We distill our theoretical and practical findings into several generalizable insights about effective GT design, training, and inference.
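To make the idea of attention governed by a graph distance metric concrete, here is a minimal, illustrative sketch: shortest-path (hop) distances between nodes are computed via BFS and a per-distance bias is added to raw attention scores before the softmax. This is a generic distance-biased attention pattern, not the GDT's actual formulation (which the abstract does not specify); the function names and the bias table `bias` are hypothetical.

```python
import math
from collections import deque

def shortest_path_distances(adj):
    """All-pairs shortest-path (hop) distances via BFS.

    adj: adjacency list, adj[u] = iterable of neighbors of u.
    Returns dist with dist[s][v] = hops from s to v, or -1 if unreachable.
    """
    n = len(adj)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == -1:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist

def distance_biased_attention(scores, dist, bias):
    """Add a per-distance bias bias[d] to raw attention scores, then
    apply a row-wise softmax. Unreachable pairs are masked with -inf.

    scores: n x n raw attention logits (e.g., scaled dot products).
    dist:   n x n hop distances from shortest_path_distances.
    bias:   list indexed by distance (a hypothetical learned table).
    """
    n = len(scores)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            d = dist[i][j]
            row.append(float("-inf") if d < 0 else scores[i][j] + bias[d])
        m = max(row)
        exps = [math.exp(x - m) for x in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

# Usage on a 3-node path graph 0-1-2 with uniform raw scores:
adj = [[1], [0, 2], [1]]
dist = shortest_path_distances(adj)
scores = [[0.0] * 3 for _ in range(3)]
bias = [0.0, -1.0, -2.0]  # hypothetical: attention decays with distance
attn = distance_biased_attention(scores, dist, bias)
```

With a bias that decreases in distance, each node attends most strongly to itself and progressively less to farther nodes, which is one way a distance metric can simultaneously shape attention and encode position.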