A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing brain graph models lack generalizability across diverse brain atlases and unseen neuropsychiatric disorders. Method: We propose BrainGFM, the first foundation model for multi-atlas, cross-disease brain graphs, pre-trained via graph contrastive learning and graph masked autoencoding on over 60,000 fMRI scans from 25,000+ subjects across 27 datasets, covering 25 neurological and psychiatric disorders and 8 widely-used atlas parcellations. BrainGFM introduces a graph-structured foundation model paradigm enabling zero- and few-shot adaptation to arbitrary atlases and previously unseen diseases. It integrates graph prompts with language prompts and employs meta-learning to optimize the graph prompts for language-guided cross-disease generalization. Contribution/Results: BrainGFM achieves significant improvements in zero- and few-shot disease classification across all 25 disorders, attaining an average AUC of >0.82 on novel diseases. The code is publicly available.
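The summary names graph contrastive learning as one of the two pre-training objectives. A minimal NumPy sketch of the general idea (two augmented views of each functional connectivity graph, pulled together by an NT-Xent loss) is shown below; the edge-dropping augmentation, the mean-pooling "encoder", and all sizes are illustrative stand-ins, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(conn, drop_prob=0.2):
    """Toy augmentation: randomly zero entries of a connectivity matrix."""
    mask = rng.random(conn.shape) > drop_prob
    return conn * mask

def embed(conn):
    """Stand-in for a graph encoder: a mean-pooled row summary per graph."""
    return conn.mean(axis=1)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss: the two views of the same graph are the positive pair,
    all other graphs in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=-1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=-1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (B, B) similarity matrix
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives on the diagonal

# Batch of 8 toy brain graphs, each a 100x100 connectivity matrix (100 ROIs)
batch = rng.standard_normal((8, 100, 100))
z1 = np.stack([embed(augment(c)) for c in batch])
z2 = np.stack([embed(augment(c)) for c in batch])
loss = nt_xent(z1, z2)
```

In the real model the encoder would be a graph neural network and the loss would be minimized by gradient descent; this sketch only illustrates the objective's structure.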

📝 Abstract
As large language models (LLMs) continue to revolutionize AI research, there is a growing interest in building large-scale brain foundation models to advance neuroscience. While most existing brain foundation models are pre-trained on time-series signals or region-of-interest (ROI) features, we propose a novel graph-based pre-training paradigm for constructing a brain graph foundation model. In this paper, we introduce the Brain Graph Foundation Model, termed BrainGFM, a unified framework that leverages graph contrastive learning and graph masked autoencoders for large-scale fMRI-based pre-training. BrainGFM is pre-trained on a diverse mixture of brain atlases with varying parcellations, significantly expanding the pre-training corpus and enhancing the model's ability to generalize across heterogeneous fMRI-derived brain representations. To support efficient and versatile downstream transfer, we integrate both graph prompts and language prompts into the model design, enabling BrainGFM to flexibly adapt to a wide range of atlases, neurological and psychiatric disorders, and task settings. Furthermore, we employ meta-learning to optimize the graph prompts, facilitating strong generalization to previously unseen disorders under both few-shot and zero-shot learning conditions via language-guided prompting. BrainGFM is pre-trained on 27 neuroimaging datasets spanning 25 common neurological and psychiatric disorders, encompassing 2 types of brain atlases (functional and anatomical) across 8 widely-used parcellations, and covering over 25,000 subjects, 60,000 fMRI scans, and a total of 400,000 graph samples aggregated across all atlases and parcellations. The code is available at: https://github.com/weixinxu666/BrainGFM
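The abstract's second pre-training objective, graph masked autoencoding, hides a subset of ROI (node) features and trains the model to reconstruct them from the visible context. A toy NumPy sketch of the masking and reconstruction loss follows; the zero mask token, the 30% mask ratio, and the identity "decoder" placeholder are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy brain graph: N ROIs from one atlas parcellation, each with a feature
# vector (e.g. a row of the functional connectivity matrix).
n_rois, feat_dim, mask_ratio = 100, 100, 0.3
features = rng.standard_normal((n_rois, feat_dim))

# Mask a random subset of ROIs by replacing their features with a mask
# token (zeros here).
n_masked = int(mask_ratio * n_rois)
masked_idx = rng.choice(n_rois, size=n_masked, replace=False)
corrupted = features.copy()
corrupted[masked_idx] = 0.0

# A real model passes `corrupted` through a graph encoder/decoder; here an
# identity placeholder stands in. The training signal is the reconstruction
# error computed on the masked ROIs only.
reconstruction = corrupted  # placeholder for the decoder output
loss = np.mean((reconstruction[masked_idx] - features[masked_idx]) ** 2)
```

Because only masked nodes contribute to the loss, the encoder must infer a hidden ROI's profile from its neighbors, which is what makes the objective useful for graph pre-training.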
Problem

Research questions and friction points this paper is trying to address.

How to build a single graph-based brain foundation model that generalizes across heterogeneous brain atlases and parcellations
How to transfer to previously unseen neurological and psychiatric disorders under few- and zero-shot conditions
How to adapt one pre-trained model efficiently to diverse downstream tasks via graph and language prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based pre-training paradigm for brain foundation models
Combines graph contrastive learning with graph masked autoencoding
Integrates graph and language prompts, with meta-learned graph prompts for cross-disorder generalization
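The prompt-based transfer described above keeps the pre-trained encoder frozen and updates only a small prompt (plus a light head) on the downstream task. A minimal sketch of this idea, with a stand-in encoder, a zero-initialized prompt added to every ROI feature, and one numerical-gradient update step, is below; everything here (the tanh encoder, logistic head, learning rate) is a hypothetical illustration, not BrainGFM's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(x, prompt):
    """Stand-in for a pre-trained (frozen) graph encoder. The learnable
    graph prompt is added to every ROI feature before encoding."""
    return np.tanh(x + prompt).mean(axis=0)   # (feat_dim,) graph embedding

n_rois, feat_dim = 100, 16
prompt = np.zeros(feat_dim)                   # learnable graph prompt
head_w = rng.standard_normal(feat_dim) * 0.1  # tiny classification head

x = rng.standard_normal((n_rois, feat_dim))   # one subject's ROI features
y = 1.0                                       # toy disease label

def loss_fn(p):
    """Logistic loss with the encoder and head held fixed; only the
    prompt p varies."""
    logit = frozen_encoder(x, p) @ head_w
    prob = 1.0 / (1.0 + np.exp(-logit))
    return -(y * np.log(prob) + (1 - y) * np.log(1 - prob))

# One prompt-tuning step via a central-difference numerical gradient
eps, lr = 1e-5, 0.1
grad = np.array([
    (loss_fn(prompt + eps * np.eye(feat_dim)[i])
     - loss_fn(prompt - eps * np.eye(feat_dim)[i])) / (2 * eps)
    for i in range(feat_dim)
])
prompt -= lr * grad
```

Because only the prompt vector is updated, this kind of adaptation needs very few labeled examples, which is what makes it attractive for few-shot transfer to unseen disorders.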