🤖 AI Summary
Existing graph foundation models rely heavily on handcrafted features and struggle to capture complex structural patterns in graphs. To address this, we propose GraphPFN, a prior-data fitted network for node-level prediction on graphs. First, we construct a prior distribution over synthetic attributed graphs: graph structure is generated by combining stochastic block models with a preferential attachment process, and graph-aware structural causal models then produce node attributes and labels with structural dependencies. Second, we adopt the LimiX architecture, augment it with attention-based graph neighborhood aggregation layers, and pretrain it on large-scale synthetic graphs sampled from this prior. The resulting model demonstrates strong in-context learning on node-level prediction tasks. When fine-tuned on real-world graphs with up to 50,000 nodes, it outperforms both G2T-FM and task-specific GNNs trained from scratch, achieving state-of-the-art results on most datasets.
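To make the two generative stages concrete, here is a minimal sketch of sampling from such a prior: structure from a stochastic block model overlaid with preferential-attachment edges, then attributes and labels from a graph-aware structural causal model. This is an illustration under assumed parameterizations, not the paper's actual generator; every name and constant here (`mix`, `p_in`, `n_feats`, ...) is hypothetical.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(n_nodes=500, n_blocks=4, p_in=0.05, p_out=0.005,
                 pa_edges=2, mix=0.5):
    """Structure prior: SBM communities mixed with preferential attachment."""
    sizes = [n_nodes // n_blocks] * n_blocks
    probs = np.full((n_blocks, n_blocks), p_out)
    np.fill_diagonal(probs, p_in)
    g = nx.stochastic_block_model(sizes, probs.tolist(), seed=0)

    # Overlay a fraction of preferential-attachment edges so samples show
    # both community structure and a heavy-tailed degree distribution.
    g_pa = nx.barabasi_albert_graph(sum(sizes), pa_edges, seed=0)
    for u, v in g_pa.edges():
        if rng.random() < mix:
            g.add_edge(u, v)
    return g

def sample_attributes(g, n_feats=8):
    """Graph-aware SCM sketch: each new feature depends on earlier features
    and on their neighborhood averages; the label depends on both as well."""
    A = nx.to_numpy_array(g)
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    X = rng.normal(size=(g.number_of_nodes(), 1))
    for _ in range(n_feats - 1):
        parents = X @ rng.normal(size=(X.shape[1], 1))   # causal parents
        neigh = (A @ parents) / deg                      # graph aggregation
        new = np.tanh(parents + neigh) + 0.1 * rng.normal(size=parents.shape)
        X = np.concatenate([X, new], axis=1)
    score = X.mean(axis=1) + (A @ X.mean(axis=1)) / deg.ravel()
    y = (score > np.median(score)).astype(int)           # binary targets
    return X, y

g = sample_graph()
X, y = sample_attributes(g)
print(g.number_of_nodes(), X.shape, y.mean())
```

Sampling many such datasets with randomized block counts, densities, and SCM depths would give the kind of diverse pretraining corpus the summary describes.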
📝 Abstract
Foundation models pretrained on large-scale datasets have transformed fields such as natural language processing and computer vision, but their application to graph data remains limited. Recently proposed graph foundation models (GFMs), such as G2T-FM, leverage tabular foundation models for graph tasks and have been shown to significantly outperform earlier attempts at building GFMs. However, these models primarily rely on hand-crafted graph features, limiting their ability to learn complex graph-specific patterns. In this work, we propose GraphPFN: a prior-data fitted network for node-level prediction. First, we design a prior distribution of synthetic attributed graphs. For graph structure generation, we use a novel combination of multiple stochastic block models and a preferential attachment process. We then apply graph-aware structural causal models to generate node attributes and targets. This procedure allows us to efficiently generate a wide range of realistic graph datasets. Next, we augment the tabular foundation model LimiX with attention-based graph neighborhood aggregation layers and train it on synthetic graphs sampled from our prior, allowing the model to capture graph structural dependencies not present in tabular data. On diverse real-world graph datasets with up to 50,000 nodes, GraphPFN shows strong in-context learning performance and achieves state-of-the-art results after fine-tuning, outperforming both G2T-FM and task-specific GNNs trained from scratch on most datasets. More broadly, our work demonstrates that pretraining on synthetic graphs from a well-designed prior distribution is an effective strategy for building graph foundation models.
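For the architectural change, the sketch below shows one plausible form of an attention-based graph neighborhood aggregation layer: each node attends only over its graph neighbors (plus itself), and the aggregated message is added residually to the node's embedding so the layer can be interleaved with a tabular transformer such as LimiX. The class name, the residual-plus-LayerNorm placement, and the dense adjacency mask are assumptions for illustration; LimiX's internals are not reproduced here.

```python
import torch
import torch.nn as nn

class NeighborhoodAttention(nn.Module):
    """Hypothetical attention-based neighborhood aggregation layer."""

    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, h, adj):
        # h:   (n_nodes, dim) node embeddings
        # adj: (n_nodes, n_nodes) boolean adjacency matrix
        # Mask out non-edges; keep self-loops so every node has at least
        # one position it is allowed to attend to.
        eye = torch.eye(adj.size(0), dtype=torch.bool, device=adj.device)
        mask = ~(adj | eye)                       # True = attention blocked
        out, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0),
                           attn_mask=mask)
        return h + self.norm(out.squeeze(0))      # residual aggregation

# Usage on a small random graph (shapes only; weights are untrained).
h = torch.randn(6, 32)
adj = torch.rand(6, 6) < 0.3
adj = adj | adj.T                                 # symmetrize
layer = NeighborhoodAttention(dim=32)
print(layer(h, adj).shape)                        # torch.Size([6, 32])
```

Restricting attention to the local neighborhood is what lets such a layer inject graph structure into an otherwise tabular in-context learner, which is the dependency the abstract says plain tabular data lacks.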