🤖 AI Summary
This survey examines how Low-Rank Adaptation (LoRA) has been integrated into federated fine-tuning of foundation models, an area the authors term FedLoRA. Federated learning (FL) enables multiple users to collaboratively fine-tune foundation models on private data while mitigating privacy risks, and LoRA makes such fine-tuning resource-efficient by dramatically reducing the number of trainable parameters. The survey organizes the FedLoRA literature around three key challenges (distributed learning, heterogeneity, and efficiency) and categorizes existing work by the specific methods used to address each challenge. It concludes by identifying open research questions and outlining promising directions for future work, providing a map of the research landscape for secure and efficient federated fine-tuning of foundation models.
📝 Abstract
Effectively leveraging private datasets remains a significant challenge in developing foundation models. Federated Learning (FL) has recently emerged as a collaborative framework that enables multiple users to fine-tune these models while mitigating data privacy risks. Meanwhile, Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models, an area we term FedLoRA, by focusing on three key challenges: distributed learning, heterogeneity, and efficiency. We further categorize existing work based on the specific methods used to address each challenge. Finally, we discuss open research questions and highlight promising directions for future investigation, outlining the next steps for advancing FedLoRA.
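To make the two ideas the abstract combines concrete, here is a minimal, illustrative sketch (not the survey's own notation or any specific surveyed method): LoRA replaces a full weight update with two small low-rank factors, and a federated server can aggregate only those factors from clients. The function names, dimensions, and the naive FedAvg-style averaging below are assumptions chosen for illustration.

```python
import numpy as np

def lora_param_counts(d_in, d_out, r):
    """Full fine-tuning updates d_in * d_out weights; LoRA trains only
    the low-rank factors A (r x d_in) and B (d_out x r)."""
    full = d_in * d_out
    lora = r * (d_in + d_out)
    return full, lora

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Frozen base weight W plus the low-rank update (alpha / r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

def fedavg_adapters(client_adapters):
    """Naive element-wise averaging of client LoRA factors (illustrative
    only; surveyed FedLoRA methods use more sophisticated aggregation
    to handle heterogeneity)."""
    A_avg = np.mean([A for A, _ in client_adapters], axis=0)
    B_avg = np.mean([B for _, B in client_adapters], axis=0)
    return A_avg, B_avg

# Example: a 4096x4096 projection with rank r=8 trains well under 1%
# of the parameters that full fine-tuning would update.
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, lora / full)
```

Only the small `A` and `B` matrices ever leave a client, which is what makes the FL + LoRA combination communication-efficient: the frozen base model `W` stays local and is never transmitted.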