🤖 AI Summary
This work addresses the limitations of existing graph foundation models, which rely on modality-specific encoders and struggle when graphs arrive already pre-encoded as vectors or when raw data is inaccessible. The authors propose the first modality-agnostic graph in-context learning framework that operates without assumptions about input modalities. Their approach extracts domain-specific features via gradient fingerprinting, aligns pre-encoded representations with labels through lightweight feature transformations, and introduces a dual-prompt-aware attention mechanism to enable few-shot cross-domain inference. Notably, the method achieves instant prediction on unseen graph domains without any parameter updates. Extensive experiments demonstrate superior few-shot performance and cross-domain generalization across multiple heterogeneous graph domains, significantly outperforming current state-of-the-art approaches.
📝 Abstract
In-context learning (ICL) converts static encoders into task-conditioned reasoners, enabling adaptation to new data from just a few examples without updating pretrained parameters. This capability is essential for graph foundation models (GFMs) to approach LLM-level generality. Yet current GFMs struggle with cross-domain alignment, typically relying on modality-specific encoders that fail when graphs are pre-vectorized or raw data is inaccessible. In this paper, we introduce Modality-Free Graph In-context Alignment (MF-GIA), a framework that makes a pretrained graph encoder promptable for few-shot prediction across heterogeneous domains without modality assumptions. MF-GIA captures domain characteristics through gradient fingerprints, which parameterize lightweight transformations that align pre-encoded features and indexed labels into unified semantic spaces. During pretraining, a dual-prompt-aware attention mechanism with an episodic objective learns to match queries against aligned support examples, establishing prompt-based reasoning capabilities. At inference, MF-GIA performs adaptation without any parameter updates, using only a few-shot support set to trigger cross-domain alignment and enable immediate prediction on unseen domains. Experiments demonstrate that MF-GIA achieves superior few-shot performance across diverse graph domains and strong generalization to unseen domains.
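To make the abstract's pipeline concrete, below is a minimal, self-contained sketch of the three steps it describes: (1) a gradient fingerprint computed from the support set, (2) a lightweight fingerprint-parameterized transform that aligns pre-encoded features, and (3) attention over the aligned support set to predict a query label with no parameter updates. Everything here is a hypothetical stand-in, not the paper's implementation: the frozen random linear probe, the fixed random hypernetwork-style projection, and all function names are assumptions for illustration only.

```python
import numpy as np

def gradient_fingerprint(X, y, num_classes):
    """Hypothetical domain fingerprint: the cross-entropy gradient of a
    frozen random linear probe after one pass over the support set."""
    d = X.shape[1]
    W = np.random.default_rng(7).standard_normal((d, num_classes)) * 0.01
    logits = X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)           # softmax probabilities
    Y = np.eye(num_classes)[y]                   # one-hot labels
    G = X.T @ (P - Y) / len(y)                   # dL/dW for mean CE loss
    f = G.flatten()
    return f / (np.linalg.norm(f) + 1e-8)

def align(X, fingerprint, out_dim):
    """Lightweight alignment transform whose weights are a fixed random
    projection of the fingerprint (stand-in for a learned hypernetwork)."""
    d = X.shape[1]
    M = np.random.default_rng(42).standard_normal(
        (fingerprint.size, d * out_dim)) / np.sqrt(fingerprint.size)
    W = (fingerprint @ M).reshape(d, out_dim)
    return X @ W

def icl_predict(q, Xs, ys, num_classes):
    """Attention over aligned support examples; the class score is the
    attention mass landing on each label. No parameters are updated."""
    sims = Xs @ q / np.sqrt(q.size)
    a = np.exp(sims - sims.max())
    a /= a.sum()
    scores = np.zeros(num_classes)
    for w, lbl in zip(a, ys):
        scores[lbl] += w
    return int(scores.argmax())

# Toy "unseen domain": two well-separated classes of pre-encoded features.
rng = np.random.default_rng(0)
d, n_cls = 16, 2
Xs = np.vstack([rng.normal(-5, 0.1, (5, d)), rng.normal(5, 0.1, (5, d))])
ys = [0] * 5 + [1] * 5
f = gradient_fingerprint(Xs, np.array(ys), n_cls)
A = align(Xs, f, out_dim=d)                      # align support features
q0 = align(rng.normal(-5, 0.1, (1, d)), f, d)[0]  # class-0 query
print(icl_predict(q0, A, ys, n_cls))
```

The support set alone drives both the fingerprint and the alignment, so a new domain is handled entirely at inference time, matching the "parameter-update-free adaptation" claim in spirit; in the actual method the transform and attention would of course be pretrained rather than random.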