🤖 AI Summary
Existing learning-based methods perform well on calibrated medical imaging such as CT, but generalize poorly to uncalibrated multi-protocol MRI (e.g., T1w/T2w/FLAIR), where performance is highly sensitive to variations in contrast, resolution, and acquisition orientation. To address this, the authors propose BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. BrainFM combines a "mild-to-severe" intra-subject image generation scheme with a "real-synth" mix-up training strategy, making it resilient to variability in image appearance (modality, contrast, deformation, resolution, artifacts). Without task-specific fine-tuning, it directly supports five fundamental brain imaging tasks: image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance estimation, bias field estimation, and registration. Evaluated on eleven public datasets, BrainFM demonstrates robust, consistent performance across all tasks and input modalities, offering a scalable foundation-model paradigm for heterogeneous clinical neuroimaging.
📝 Abstract
Recent learning-based approaches have made astonishing advances in calibrated medical imaging like computed tomography (CT), yet they struggle to generalize in uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. Here we introduce BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. With the proposed "mild-to-severe" intra-subject generation and "real-synth" mix-up training strategy, BrainFM is resilient to the appearance of acquired images (e.g., modality, contrast, deformation, resolution, artifacts), and can be directly applied to five fundamental brain imaging tasks, including image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance estimation, bias field estimation, and registration. We evaluate the efficacy of BrainFM on eleven public datasets, and demonstrate its robustness and effectiveness across all tasks and input modalities. Code is available at https://github.com/jhuldr/BrainFM.