A Survey on Federated Fine-tuning of Large Language Models

📅 2025-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical challenges in privacy-preserving federated learning (FL) for fine-tuning large language models (LLMs), including excessive communication overhead, difficult privacy–utility trade-offs, and the absence of standardized evaluation protocols. To tackle these issues, we establish the first systematic analytical framework for FedLLM and propose a unified benchmark that jointly quantifies privacy guarantees, computational efficiency constraints, and cross-domain generalization capability. We conduct a comprehensive empirical assessment of parameter-efficient fine-tuning (PEFT) methods—including LoRA and Adapter—under realistic federated settings, identifying their applicability boundaries and practical limitations. Furthermore, we open-source a continuously updated GitHub repository integrating state-of-the-art algorithms, reproducible experimental configurations, and benchmark results. Our work provides foundational theoretical insights, actionable implementation guidelines, and an authoritative survey resource, thereby advancing the secure, privacy-compliant customization and deployment of LLMs in federated environments.

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across a wide range of tasks, with fine-tuning playing a pivotal role in adapting them to specific downstream applications. Federated Learning (FL) offers a promising approach that enables collaborative model adaptation while ensuring data privacy, i.e., FedLLM. In this survey, we provide a systematic and thorough review of the integration of LLMs with FL. Specifically, we first trace the historical evolution of both LLMs and FL, while summarizing relevant prior surveys. We then present an in-depth analysis of the fundamental challenges encountered in deploying FedLLM. Following this, we conduct an extensive study of existing parameter-efficient fine-tuning (PEFT) methods and explore their applicability in FL. Furthermore, we introduce a comprehensive evaluation benchmark to rigorously assess FedLLM performance and discuss its diverse real-world applications across multiple domains. Finally, we identify critical open challenges and outline promising research directions to drive future advancements in FedLLM. We maintain an active [GitHub repository](https://github.com/Clin0212/Awesome-Federated-LLM-Learning) tracking cutting-edge advancements. This survey serves as a foundational resource for researchers and practitioners, offering insights into the evolving landscape of federated fine-tuning for LLMs while guiding future innovations in privacy-preserving AI.
Problem

Research questions and friction points this paper is trying to address.

Integrating Large Language Models with Federated Learning
Addressing challenges in Federated Fine-tuning of LLMs
Developing privacy-preserving AI applications using FedLLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning for LLM fine-tuning
Parameter-efficient fine-tuning methods
Comprehensive evaluation benchmark for FedLLM
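To make the PEFT-in-FL idea above concrete, here is a minimal, self-contained sketch (not from the paper; all names and shapes are illustrative) of the core aggregation step: each client fine-tunes only a LoRA-style low-rank adapter (A, B) locally, and the server performs FedAvg over the adapter weights instead of the full model, which is what keeps communication cheap.

```python
# Hypothetical sketch: FedAvg over LoRA adapter factors only.
# Uses plain Python lists so it runs without any dependencies.

def lora_delta(A, B):
    """Low-rank weight update Delta W = B @ A (B: d_out x r, A: r x d_in)."""
    r, d_in = len(A), len(A[0])
    d_out = len(B)
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
            for i in range(d_out)]

def fedavg(client_adapters, weights):
    """Weighted average of each client's (A, B) adapter pair."""
    total = sum(weights)

    def avg(mats):
        rows, cols = len(mats[0]), len(mats[0][0])
        return [[sum(w * m[i][j] for w, m in zip(weights, mats)) / total
                 for j in range(cols)] for i in range(rows)]

    As = [a for a, _ in client_adapters]
    Bs = [b for _, b in client_adapters]
    return avg(As), avg(Bs)

# Two clients, rank-1 adapters for a 2x2 weight matrix.
client1 = ([[1.0, 0.0]], [[1.0], [0.0]])   # (A, B) after local training
client2 = ([[0.0, 2.0]], [[0.0], [2.0]])
A_avg, B_avg = fedavg([client1, client2], weights=[1, 1])
print(A_avg)  # [[0.5, 1.0]]
print(B_avg)  # [[0.5], [1.0]]
```

Note one subtlety the survey's design space touches on: averaging the factors A and B separately is not the same as averaging the products B @ A, which is one reason specialized federated-LoRA aggregation schemes exist beyond plain FedAvg.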
👥 Authors
Yebo Wu (State Key Laboratory of IoTSC, University of Macau, China)
Chunlin Tian (University of Macau)
Jingguang Li (Dali University)
He Sun (State Key Laboratory of IoTSC, University of Macau, China)
Kahou Tam (State Key Laboratory of IoTSC, University of Macau, China)
Li Li (State Key Laboratory of IoTSC, University of Macau, China)
Chengzhong Xu (State Key Laboratory of IoTSC, University of Macau, China)