🤖 AI Summary
This work proposes Omni-fMRI, the first atlas-free, voxel-level foundation model for fMRI analysis. It addresses a key limitation of existing approaches, which rely on predefined brain parcellation atlases and consequently discard fine-grained voxel-level detail while introducing atlas-dependent anatomical bias. Through self-supervised learning, the model learns representations directly from raw voxel signals without atlas-based constraints, and a novel dynamic patching mechanism preserves spatial structure while making large-scale pretraining computationally tractable. The study also establishes a standardized benchmark of 11 diverse datasets covering both resting-state and task-based fMRI. Across all evaluated tasks, Omni-fMRI consistently outperforms current state-of-the-art models, demonstrating strong generalization and scalability.
📝 Abstract
Self-supervised fMRI foundation models have shown promising transfer performance, yet most rely on predefined region-level parcellations that discard fine-grained voxel information and introduce atlas-dependent biases. We propose Omni-fMRI, an atlas-free foundation model that operates directly on voxel-level signals. To enable scalable pretraining on 49,497 fMRI sessions across nine datasets, Omni-fMRI introduces a dynamic patching mechanism that substantially reduces computational cost while preserving informative spatial structure. To support reproducibility and fair comparison, we establish a comprehensive benchmark suite spanning 11 datasets and a diverse set of resting-state and task-based fMRI tasks. Experimental results demonstrate that Omni-fMRI consistently outperforms existing foundation models, providing a scalable and reproducible framework for atlas-free brain representation learning. Code and logs are available.
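The abstract does not spell out how the dynamic patching mechanism works. As a rough, hypothetical illustration of the general idea, the sketch below (all names such as `dynamic_patch`, `sim_thresh`, and `max_patch` are invented, and the spatial ordering of voxels is assumed) merges spatially adjacent voxels with similar time series into variable-size patches and averages each into one token, shrinking the sequence length a transformer would process; the paper's actual mechanism is presumably learned and more sophisticated.

```python
import numpy as np

def dynamic_patch(voxel_ts: np.ndarray, max_patch: int = 64, sim_thresh: float = 0.5):
    """Group spatially ordered voxels into variable-size patches (illustrative only).

    voxel_ts: (V, T) array of voxel time series, assumed ordered so that
              neighboring rows are spatial neighbors (e.g., a Z-order scan).
    Returns a list of (start, end) index ranges and a (P, T) token array,
    one averaged token per patch, so P << V when signals are locally similar.
    """
    V, _ = voxel_ts.shape
    # Normalize each time series so cosine similarity reduces to a dot product.
    norm = voxel_ts / (np.linalg.norm(voxel_ts, axis=1, keepdims=True) + 1e-8)

    patches, start = [], 0
    for i in range(1, V):
        # Open a new patch when the signal decorrelates from its neighbor,
        # or when the current patch hits the size cap.
        if float(norm[i] @ norm[i - 1]) < sim_thresh or (i - start) >= max_patch:
            patches.append((start, i))
            start = i
    patches.append((start, V))

    tokens = np.stack([voxel_ts[s:e].mean(axis=0) for s, e in patches])
    return patches, tokens

# Toy example: 10,000 "voxels" sharing 100 latent regional signals plus noise.
rng = np.random.default_rng(0)
base = rng.standard_normal((100, 200))        # 100 regional time courses
ts = np.repeat(base, 100, axis=0)             # 100 voxels per region
ts = ts + 0.2 * rng.standard_normal(ts.shape) # voxel-level noise
patches, tokens = dynamic_patch(ts)
print(f"{ts.shape[0]} voxels -> {tokens.shape[0]} tokens")
```

On correlated toy data like this, the token count drops by roughly two orders of magnitude while each patch still summarizes a spatially coherent signal, which is the kind of cost-versus-structure trade-off the abstract attributes to dynamic patching.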