Toward a Multi-View Brain Network Foundation Model: Cross-View Consistency Learning Across Arbitrary Atlases

📅 2026-03-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing foundation models for brain networks are limited by their reliance on specific atlases, insufficient exploitation of multi-view information, and weak integration of anatomical priors, which hinders generalization across arbitrary brain parcellations. This work proposes MV-BrainFM, the first framework to achieve consistent multi-view representation learning under any brain atlas. It leverages an anatomical distance-guided Transformer to model inter-regional interactions, aligns multi-atlas networks from the same subject in a shared latent space, and employs a unified unsupervised multi-view pretraining paradigm to jointly learn from multiple datasets and atlases. Evaluated on 17 fMRI datasets encompassing over 20,000 subjects, MV-BrainFM significantly outperforms 14 baseline models, achieving state-of-the-art performance in both single-atlas and cross-atlas settings and demonstrating strong scalability and atlas robustness.
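The summary's "anatomical distance-guided Transformer" can be illustrated with a minimal sketch: a single self-attention head over brain regions whose attention logits are penalized by pairwise anatomical distance, so nearby regions attend to each other more strongly. This is an illustrative NumPy reconstruction under assumptions, not the paper's implementation; the function name, the linear-penalty form, and the `alpha` weight are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distance_guided_attention(x, dist, wq, wk, wv, alpha=1.0):
    """One self-attention head over N brain regions, biased by anatomy.

    x:    (N, d) region feature vectors for one subject
    dist: (N, N) anatomical distances between region centroids
    wq, wk, wv: (d, d) projection matrices
    alpha: strength of the distance penalty (hypothetical hyperparameter)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # Scaled dot-product logits minus a distance penalty:
    # anatomically close region pairs receive higher attention weight.
    logits = (q @ k.T) / np.sqrt(q.shape[-1]) - alpha * dist
    attn = softmax(logits, axis=-1)      # (N, N) row-stochastic
    return attn @ v                      # (N, d) updated region features
```

Because the bias depends only on a distance matrix, the same module applies to any atlas: a parcellation with a different number of regions N simply yields a different-sized `dist`.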

๐Ÿ“ Abstract
Brain network analysis provides an interpretable framework for characterizing brain organization and has been widely used for neurological disorder identification. Recent advances in self-supervised learning have motivated the development of brain network foundation models. However, existing approaches are often limited by atlas dependency, insufficient exploitation of multiple network views, and weak incorporation of anatomical priors. In this work, we propose MV-BrainFM, a multi-view brain network foundation model designed to learn generalizable and scalable representations from brain networks constructed with arbitrary atlases. MV-BrainFM explicitly incorporates anatomical distance information into Transformer-based modeling to guide inter-regional interactions, and introduces an unsupervised cross-view consistency learning strategy to align representations from multiple atlases of the same subject in a shared latent space. By jointly enforcing within-view robustness and cross-view alignment during pretraining, the model effectively captures complementary information across heterogeneous network views while remaining atlas-aware. In addition, MV-BrainFM adopts a unified multi-view pretraining paradigm that enables simultaneous learning from multiple datasets and atlases, significantly improving computational efficiency compared to conventional sequential training strategies. The proposed framework also demonstrates strong scalability, consistently benefiting from increasing data diversity while maintaining stable performance across unseen atlas configurations. Extensive experiments on more than 20K subjects from 17 fMRI datasets show that MV-BrainFM consistently outperforms 14 existing brain network foundation models and task-specific baselines under both single-atlas and multi-atlas settings.
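The abstract's "unsupervised cross-view consistency learning strategy" pulls together embeddings of the same subject computed under different atlases in a shared latent space. A common way to realize such an objective is a symmetric InfoNCE-style contrastive loss; the sketch below is a minimal NumPy version under that assumption, with hypothetical names and temperature, and is not claimed to be the paper's exact loss.

```python
import numpy as np

def cross_view_consistency_loss(z_a, z_b, temperature=0.1):
    """Symmetric contrastive alignment of two atlas views.

    z_a, z_b: (B, d) embeddings of the same B subjects, produced from
    brain networks built with atlas A and atlas B respectively.
    Matching subjects sit on the diagonal of the similarity matrix.
    """
    za = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    zb = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = za @ zb.T / temperature  # (B, B) cosine similarities

    def ce_diag(l):
        # Mean cross-entropy with the diagonal as the correct match
        # (stable log-softmax along each row).
        m = l.max(axis=1, keepdims=True)
        logp = l - m - np.log(np.exp(l - m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetrize: atlas A queries atlas B and vice versa.
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))
```

Minimizing this loss makes a subject's representation atlas-consistent: the embedding from one parcellation must identify the same subject among the other parcellation's embeddings, which is the within-batch form of the cross-view alignment described above.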
Problem

Research questions and friction points this paper is trying to address.

atlas dependency
multi-view brain networks
anatomical priors
foundation model
cross-view consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-view learning
cross-view consistency
anatomical priors
foundation model
atlas-invariant representation