Personalized Federated Fine-Tuning of Vision Foundation Models for Healthcare

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical vision foundation models face challenges in privacy-constrained settings: insufficient labeled data and difficulty balancing generalizability with client-specific adaptation. To address this, we propose an orthogonal LoRA-based personalized fine-tuning framework for federated learning. Our method introduces orthogonality-constrained low-rank adapters to explicitly decouple globally shared knowledge from client-specific representations, thereby mitigating cross-client knowledge interference and enhancing local adaptation capability. Extensive experiments across multiple real-world medical imaging tasks demonstrate that our approach achieves performance on par with or superior to state-of-the-art federated fine-tuning methods—while strictly preserving data privacy—and significantly outperforms non-personalized baselines. This work establishes a scalable, high-accuracy, and privacy-preserving paradigm for multi-center collaborative modeling in healthcare.

📝 Abstract
Foundation models open up new possibilities for the use of AI in healthcare. However, even when pre-trained on health data, they still need to be fine-tuned for specific downstream tasks. Furthermore, although foundation models reduce the amount of training data required to achieve good performance, obtaining sufficient data is still a challenge. This is due, in part, to restrictions on sharing and aggregating data from different sources to protect patients' privacy. One possible solution to this is to fine-tune foundation models via federated learning across multiple participating clients (i.e., hospitals, clinics, etc.). In this work, we propose a new personalized federated fine-tuning method that learns orthogonal LoRA adapters to disentangle general and client-specific knowledge, enabling each client to fully exploit both their own data and the data of others. Our preliminary results on real-world federated medical imaging tasks demonstrate that our approach is competitive against current federated fine-tuning methods.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning vision foundation models for healthcare tasks
Addressing data scarcity while preserving patient privacy
Developing personalized federated learning across multiple healthcare institutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized federated fine-tuning for vision foundation models
Orthogonal LoRA adapters disentangle general and client-specific knowledge
Enables clients to exploit both local and external data
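The orthogonal-adapter idea above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes each linear layer carries two LoRA adapters (a globally shared pair `A_g, B_g` aggregated across clients and a local pair `A_c, B_c` kept on-device), with a Frobenius-norm penalty `||A_g A_c^T||_F^2` pushing the two low-rank subspaces toward orthogonality. All names, ranks, and the exact penalty form are illustrative assumptions.

```python
# Hypothetical sketch of orthogonality-regularized dual LoRA adapters.
# The rank r, init scale, and penalty form are assumptions for illustration,
# not the paper's reported hyperparameters.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 4

# Frozen pretrained weight of one linear layer.
W0 = rng.normal(size=(d_out, d_in))

# Globally shared adapter: sent to the server and averaged across clients.
A_g = rng.normal(size=(r, d_in)) * 0.01
B_g = np.zeros((d_out, r))  # zero-init so the adapter starts as a no-op

# Client-specific adapter: never leaves the client (personalization).
A_c = rng.normal(size=(r, d_in)) * 0.01
B_c = np.zeros((d_out, r))


def effective_weight(W0, B_g, A_g, B_c, A_c):
    """Forward weight: frozen base plus both low-rank updates."""
    return W0 + B_g @ A_g + B_c @ A_c


def orth_penalty(A_g, A_c):
    """Regularizer ||A_g A_c^T||_F^2: zero when the global and
    client adapter row spaces are orthogonal."""
    return float(np.sum((A_g @ A_c.T) ** 2))


W = effective_weight(W0, B_g, A_g, B_c, A_c)
penalty = orth_penalty(A_g, A_c)
```

In training, `penalty` would be added (scaled by some weight) to each client's task loss, so that the shared adapter captures knowledge common across hospitals while the local adapter absorbs client-specific variation without interfering with it; only `A_g, B_g` would be communicated in each federated round, preserving both privacy and personalization.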
Adam Tupper
Institut intelligence et données (IID), Université Laval, Mila - Quebec AI Institute
Christian Gagné
Professor at Université Laval - IID - LVSN - CeRVIM - CRDM - CIFAR - Mila
machine learning · deep learning · evolutionary computation