FCN-LLM: Empower LLM for Brain Functional Connectivity Network Understanding via Graph-level Multi-task Instruction Tuning

πŸ“… 2026-03-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of aligning brain functional connectivity networks (FCNs) with textual semantics, a gap that prevents large language models (LLMs) from directly interpreting FCNs. To bridge it, the authors propose a graph-level multi-task instruction tuning paradigm: a multi-scale FCN encoder projects FCN features into the semantic space of an LLM, followed by multi-stage joint training guided by 19 diverse instructions covering demographic, phenotypic, and psychiatric attributes. This approach achieves, for the first time, end-to-end comprehension of resting-state fMRI-derived FCNs by an LLM. Evaluated on large-scale, multi-site datasets, the method shows strong zero-shot generalization, significantly outperforming conventional supervised models and existing brain network foundation models, and thereby offers greater flexibility and interpretability for neuroscientific applications.

πŸ“ Abstract
Large Language Models (LLMs) have achieved remarkable success in language understanding and reasoning, and their multimodal extensions enable comprehension of images, video, and audio. Inspired by this, foundation models for brain functional connectivity networks (FCNs) derived from resting-state fMRI have shown promise in clinical tasks. However, existing methods do not align FCNs with the text modality, limiting the ability of LLMs to directly understand FCNs. To address this, we propose FCN-LLM, a framework that enables LLMs to understand FCNs through graph-level, multi-task instruction tuning. Our approach employs a multi-scale FCN encoder capturing brain-region, functional-subnetwork, and whole-brain features, projecting them into the semantic space of the LLM. We design multi-paradigm instruction tasks covering 19 subject-specific attributes across demographics, phenotypes, and psychiatric conditions. A multi-stage learning strategy first aligns FCN embeddings with the LLM and then jointly fine-tunes the entire model to capture high-level semantic information. Experiments on a large-scale, multi-site FCN database show that FCN-LLM achieves strong zero-shot generalization on unseen datasets, outperforming conventional supervised and foundation models. This work introduces a new paradigm for integrating brain functional networks with LLMs, offering a flexible and interpretable framework for neuroscience.
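The abstract's encoding pipeline — region-level, subnetwork-level, and whole-brain FCN features projected into a shared LLM token space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random projection stands in for the learned projector trained during the alignment stage, and all names, shapes, and the subnetwork assignment are hypothetical.

```python
import numpy as np

def encode_fcn_multiscale(fcn, subnet_labels, d_llm, rng=None):
    """Toy multi-scale FCN encoder: region, subnetwork, and whole-brain
    features, each linearly projected into a shared d_llm-dim token space.
    The projection is a random stand-in for a learned alignment layer."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = fcn.shape[0]
    # Region level: each region's connectivity profile (one row of the FCN).
    region_feats = fcn                                  # (n, n)
    # Subnetwork level: mean-pool region profiles within each subnetwork.
    subnets = np.unique(subnet_labels)
    subnet_feats = np.stack([fcn[subnet_labels == s].mean(axis=0)
                             for s in subnets])         # (k, n)
    # Whole-brain level: the global mean connectivity profile.
    global_feat = fcn.mean(axis=0, keepdims=True)       # (1, n)
    # Project all scales into the LLM embedding space as a token sequence.
    proj = rng.standard_normal((n, d_llm)) / np.sqrt(n)
    tokens = np.concatenate([region_feats, subnet_feats, global_feat]) @ proj
    return tokens                                       # (n + k + 1, d_llm)

# Example: 8 brain regions, 2 functional subnetworks, 16-dim LLM space.
fcn = np.corrcoef(np.random.default_rng(1).standard_normal((8, 50)))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
tokens = encode_fcn_multiscale(fcn, labels, d_llm=16)
print(tokens.shape)  # (11, 16)
```

The resulting token sequence would then be prepended to the tokenized instruction text, so the LLM attends over graph-derived and textual tokens jointly during instruction tuning.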
Problem

Research questions and friction points this paper is trying to address.

Functional Connectivity Network
Large Language Model
Multimodal Alignment
Brain Imaging
Resting-state fMRI
Innovation

Methods, ideas, or system contributions that make the work stand out.

functional connectivity network
large language model
instruction tuning
multi-task learning
zero-shot generalization
πŸ”Ž Similar Papers
No similar papers found.