Generative Autoregressive Transformers for Model-Agnostic Federated MRI Reconstruction

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing federated MRI reconstruction methods assume homogeneous model architectures across participating sites, limiting adaptability to heterogeneous computational resources and clinical requirements. To address this, we propose the first privacy-preserving federated learning framework supporting architectural heterogeneity. Our approach comprises three key components: (1) a model-agnostic generative federated paradigm; (2) a site-prompted generative autoregressive transformer (GAT) prior enabling controllable, cross-site image synthesis; and (3) a synergistic training mechanism integrating federated knowledge distillation with heterogeneous model coordination. Evaluated on multi-center MRI datasets, our method significantly outperforms state-of-the-art federated baselines, maintaining high local reconstruction fidelity while substantially improving cross-site generalization. This work establishes a new paradigm for flexible, secure, and clinically adaptable federated medical imaging collaboration.

📝 Abstract
Although learning-based models hold great promise for MRI reconstruction, single-site models built on limited local datasets often suffer from poor generalization. This challenge has spurred interest in collaborative model training on multi-site datasets via federated learning (FL) -- a privacy-preserving framework that aggregates model updates instead of sharing imaging data. Conventional FL builds a global model by aggregating locally trained model weights, inherently constraining all sites to a homogeneous model architecture. This rigid homogeneity requirement forces sites to forgo architectures tailored to their compute infrastructure and application-specific demands. Consequently, existing FL methods for MRI reconstruction fail to support model-heterogeneous settings, where individual sites are allowed to use distinct architectures. To overcome this fundamental limitation, here we introduce FedGAT, a novel model-agnostic FL technique based on generative autoregressive transformers. FedGAT decentralizes the training of a global generative prior that captures the distribution of multi-site MR images. For enhanced fidelity, we propose a novel site-prompted GAT prior that controllably synthesizes MR images from desired sites via autoregressive prediction across spatial scales. Each site then trains its site-specific reconstruction model -- using its preferred architecture -- on a hybrid dataset comprising the local MRI dataset and GAT-generated synthetic MRI datasets for other sites. Comprehensive experiments on multi-institutional datasets demonstrate that FedGAT supports flexible collaborations while enjoying superior within-site and across-site reconstruction performance compared to state-of-the-art FL baselines.
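The abstract describes a two-stage protocol: first, sites collaboratively train a shared site-prompted generative prior via federated aggregation; then each site trains its own reconstruction architecture on a hybrid set of local images plus synthetic images generated for the other sites. A minimal sketch of that workflow is shown below; all function and variable names are illustrative assumptions, with toy stand-ins for the actual generative prior and reconstruction training.

```python
import random

def federated_average(weight_sets):
    """FedAvg-style aggregation: element-wise mean of per-site weights."""
    n = len(weight_sets)
    return [sum(ws) / n for ws in zip(*weight_sets)]

def train_site_prior(global_weights, site_id, local_data):
    """Stand-in for one round of local training of the site-prompted
    generative prior; here we merely perturb the shared weights."""
    return [w + 0.01 * random.random() for w in global_weights]

def synthesize(prior_weights, site_id, n):
    """Stand-in for autoregressive synthesis of n images conditioned
    on a site prompt."""
    return [f"synthetic_image(site={site_id},idx={i})" for i in range(n)]

# Stage 1: federated training of the shared generative prior.
sites = {"A": ["imgA1", "imgA2"], "B": ["imgB1"]}
prior = [0.0] * 4  # toy weight vector
for round_ in range(3):
    local = [train_site_prior(prior, s, d) for s, d in sites.items()]
    prior = federated_average(local)

# Stage 2: each site trains its preferred architecture on a hybrid set
# of local data plus synthetic data for the other sites.
for site_id, local_data in sites.items():
    other_sites = [s for s in sites if s != site_id]
    synthetic = [img for s in other_sites for img in synthesize(prior, s, 2)]
    hybrid = local_data + synthetic
    # a site-specific reconstruction model would be trained on `hybrid` here
```

Only the prior's weight updates cross site boundaries in stage 1, which is what preserves the privacy of the raw imaging data while still letting each site see (synthetic) samples from the other sites' distributions in stage 2.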
Problem

Research questions and friction points this paper is trying to address.

MRI reconstruction generalization
Model-heterogeneous federated learning
Generative autoregressive transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative autoregressive transformers for MRI
Model-agnostic federated learning technique
Site-prompted GAT prior synthesis
V. A. Nezhad
Department of Electrical and Electronics Engineering, and the National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey
Gokberk Elmas
Department of Electrical and Electronics Engineering, and the National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey
Bilal Kabas
M.Sc. Student, Bilkent University
Machine Learning, Deep Learning, Computer Vision, Medical Imaging
Fuat Arslan
M.S., Bilkent University
Machine Learning, Medical Imaging
Tolga Çukur
Department of Electrical and Electronics Engineering, and the National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey