🤖 AI Summary
Medical vision foundation models face two challenges in privacy-constrained settings: insufficient labeled data and the difficulty of balancing generalizability with client-specific adaptation. To address this, we propose an orthogonal LoRA-based personalized fine-tuning framework for federated learning. Our method introduces orthogonality-constrained low-rank adapters that explicitly decouple globally shared knowledge from client-specific representations, mitigating cross-client knowledge interference and improving local adaptation. Experiments across multiple real-world medical imaging tasks demonstrate that our approach matches or exceeds state-of-the-art federated fine-tuning methods while strictly preserving data privacy, and significantly outperforms non-personalized baselines. This work establishes a scalable, accurate, and privacy-preserving paradigm for multi-center collaborative modeling in healthcare.
📝 Abstract
Foundation models open up new possibilities for the use of AI in healthcare. However, even when pre-trained on health data, they still need to be fine-tuned for specific downstream tasks. Furthermore, although foundation models reduce the amount of training data required to achieve good performance, obtaining sufficient data is still a challenge. This is due, in part, to restrictions on sharing and aggregating data from different sources to protect patients' privacy. One possible solution is to fine-tune foundation models via federated learning across multiple participating clients (i.e., hospitals, clinics, etc.). In this work, we propose a new personalized federated fine-tuning method that learns orthogonal LoRA adapters to disentangle general and client-specific knowledge, enabling each client to fully exploit both its own data and the data of others. Our preliminary results on real-world federated medical imaging tasks demonstrate that our approach is competitive with current federated fine-tuning methods.
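The core idea of disentangling general and client-specific knowledge via orthogonal LoRA adapters can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each adapter contributes a low-rank update ΔW = BA, measures interference between the global and client adapters as the squared Frobenius norm of the cross-Gram matrix of their A factors, and enforces orthogonality by projection (in practice, a soft penalty added to the training loss is a likelier choice). All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # feature dimension and LoRA rank

# Hypothetical global and client LoRA "A" factors (shape r x d);
# each adapter's update is Delta W = B @ A, so the rows of A span
# the subspace of the input space that the adapter acts on.
A_global = rng.standard_normal((r, d))
A_client_raw = rng.standard_normal((r, d))

def orthogonality_penalty(A1, A2):
    """Squared Frobenius norm of the cross-Gram matrix A1 @ A2.T.

    Zero exactly when the row spaces of A1 and A2 are orthogonal,
    i.e. the global and client adapters touch disjoint subspaces,
    so shared and client-specific knowledge do not interfere.
    """
    return float(np.sum((A1 @ A2.T) ** 2))

# Enforce the constraint by projecting the client factor onto the
# orthogonal complement of the global factor's row space:
#   A_c <- A_c (I - A_g^T (A_g A_g^T)^{-1} A_g)
P = A_global.T @ np.linalg.inv(A_global @ A_global.T) @ A_global
A_client = A_client_raw @ (np.eye(d) - P)

print(orthogonality_penalty(A_global, A_client_raw))  # large: adapters overlap
print(orthogonality_penalty(A_global, A_client))      # ~0: decoupled subspaces
```

Under this reading, the server aggregates only the global adapters across clients, while each orthogonal client adapter stays local, which is how both generalizable and client-specific representations can be learned without raw data ever leaving a site.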