🤖 AI Summary
In medical federated learning (FL), clients must rapidly adapt to novel disease-diagnosis tasks but lack mechanisms for cross-task and cross-client knowledge reuse. Method: We propose a knowledge-enhanced adapter initialization framework that integrates global clustering with a bi-level optimization strategy: an upper-level optimizer collaboratively learns global intra-cluster personalization weights across clients to enable cross-task transfer, while a lower-level optimizer tunes local inter-cluster aggregation weights toward each client's task objective to model cross-client knowledge distributions; adapters are thus hierarchically initialized to accelerate cold-start adaptation. The framework supports multimodal medical data and enables efficient large-model adaptation in FL settings. Contribution/Results: Evaluated on three benchmark medical datasets—dermatology, chest X-ray, and retinal OCT—our method achieves significantly higher diagnostic accuracy for unseen diseases than existing FL adapter approaches, demonstrating superior generalizability and practical utility in real-world clinical federated scenarios.
📝 Abstract
In healthcare, federated learning (FL) is a widely adopted framework that enables privacy-preserving collaboration among medical institutions. With large foundation models (FMs) demonstrating impressive capabilities, using FMs in FL through cost-efficient adapter tuning has become a popular approach. Given the rapidly evolving healthcare environment, it is crucial for individual clients to quickly adapt to new tasks or diseases by tuning adapters while drawing upon past experiences. In this work, we introduce Federated Knowledge-Enhanced Initialization (FedKEI), a novel framework that leverages cross-client and cross-task transfer from past knowledge to generate informed initializations for learning new tasks with adapters. FedKEI begins with a global clustering process at the server to generalize knowledge across tasks, followed by the optimization of aggregation weights across clusters (inter-cluster weights) and within each cluster (intra-cluster weights) to personalize knowledge transfer for each new task. To facilitate more effective learning of the inter- and intra-cluster weights, we adopt a bi-level optimization scheme that collaboratively learns the global intra-cluster weights across clients and optimizes the local inter-cluster weights toward each client's task objective. Extensive experiments on three benchmark datasets of different modalities, including dermatology, chest X-rays, and retinal OCT, demonstrate FedKEI's advantage in adapting to new diseases compared to state-of-the-art methods.
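As a rough illustration of the hierarchical initialization idea described above (a sketch for exposition, not the paper's implementation), the code below clusters past adapter parameter vectors on the server and forms a new task's adapter initialization as a nested softmax-weighted mixture: inter-cluster weights mix clusters, and intra-cluster weights mix the adapters within each cluster. The plain k-means step, the function names, and the use of flattened parameter vectors are all assumptions made for this sketch.

```python
import numpy as np

def cluster_adapters(adapters, k, iters=20, seed=0):
    """Toy server-side k-means over flattened adapter parameter vectors
    (stand-in for the global clustering step; not the paper's exact method)."""
    rng = np.random.default_rng(seed)
    centroids = adapters[rng.choice(len(adapters), k, replace=False)]
    for _ in range(iters):
        # Assign each past adapter to its nearest cluster centroid.
        dists = np.linalg.norm(adapters[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = adapters[labels == c].mean(axis=0)
    return labels

def knowledge_init(adapters, labels, inter_w, intra_w):
    """Hierarchical initialization: softmax inter-cluster weights over
    softmax intra-cluster weights (both vectors are hypothetical learnable
    parameters, optimized per client/task in the actual framework)."""
    k = int(labels.max()) + 1
    a = np.exp(inter_w) / np.exp(inter_w).sum()   # inter-cluster mixture
    init = np.zeros(adapters.shape[1])
    for c in range(k):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue  # skip empty clusters
        b = np.exp(intra_w[idx])
        b /= b.sum()                              # intra-cluster mixture
        init += a[c] * (b[:, None] * adapters[idx]).sum(axis=0)
    return init
```

In the bi-level view sketched in the abstract, `intra_w` would be learned globally across clients while `inter_w` is optimized locally against each client's task loss; the resulting `init` then warm-starts the adapter for the new disease instead of a random initialization.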