FedQuad: Adaptive Layer-wise LoRA Deployment and Activation Quantization for Federated Fine-Tuning

📅 2025-06-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the bottlenecks of high activation-memory overhead, computationally intensive full backpropagation, and synchronization difficulties among resource-constrained, heterogeneous edge devices in Federated Fine-Tuning (FedFT), this paper proposes FedQuad, a device-adaptive framework that co-optimizes layer-wise LoRA depth selection and activation quantization. The method integrates Low-Rank Adaptation (LoRA), layer-wise activation quantization, resource-aware configuration search, and a greedy scheduling strategy, enabling heterogeneous clients to dynamically select the optimal number of tunable LoRA layers and the quantization bit-width under their local resource constraints. Extensive experiments under multi-device heterogeneous settings demonstrate that the proposed approach converges 1.4×–5.3× faster than baseline FedFT methods, substantially improving the deployment efficiency and practicality of privacy-preserving large-model FedFT on edge devices.
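To make the memory-saving mechanism concrete, here is a minimal PyTorch sketch of layer-wise activation quantization: the forward pass of a linear layer keeps only an int8 copy of its input activation, and the backward pass dequantizes it to form the weight gradient. The class name, the symmetric per-tensor int8 scheme, and the resulting gradient approximation are illustrative assumptions, not FedQuad's exact quantizer.

```python
import torch

class QuantizedActivationLinear(torch.autograd.Function):
    """Linear layer whose saved activation is stored as int8 instead of fp16/fp32.
    Hypothetical sketch; FedQuad's actual quantizer may differ."""

    @staticmethod
    def forward(ctx, x, weight, bias):
        # Symmetric per-tensor quantization of the activation, kept only for backward.
        scale = x.detach().abs().max().clamp(min=1e-8) / 127.0
        x_q = torch.clamp((x.detach() / scale).round(), -127, 127).to(torch.int8)
        ctx.save_for_backward(x_q, weight, scale)
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_out):
        x_q, weight, scale = ctx.saved_tensors
        x_hat = x_q.to(grad_out.dtype) * scale   # dequantize the stored activation
        grad_x = grad_out @ weight               # (B, out) @ (out, in) -> (B, in)
        grad_w = grad_out.t() @ x_hat            # (out, B) @ (B, in)  -> (out, in)
        grad_b = grad_out.sum(dim=0)             # (out,)
        return grad_x, grad_w, grad_b

# Usage: gradients still flow, but only the int8 activation is held in memory.
x = torch.randn(4, 16, requires_grad=True)
w = torch.randn(8, 16, requires_grad=True)
b = torch.zeros(8, requires_grad=True)
QuantizedActivationLinear.apply(x, w, b).sum().backward()
```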

📝 Abstract
Federated fine-tuning (FedFT) provides an effective paradigm for fine-tuning large language models (LLMs) in privacy-sensitive scenarios. However, practical deployment remains challenging due to the limited resources on end devices. Existing methods typically utilize parameter-efficient fine-tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), to substantially reduce communication overhead. Nevertheless, significant memory usage for activation storage and computational demands from full backpropagation remain major barriers to efficient deployment on resource-constrained end devices. Moreover, substantial resource heterogeneity across devices results in severe synchronization bottlenecks, diminishing the overall fine-tuning efficiency. To address these issues, we propose FedQuad, a novel LoRA-based FedFT framework that adaptively adjusts the LoRA depth (the number of consecutive tunable LoRA layers from the output) according to device computational capabilities, while employing activation quantization to reduce memory overhead, thereby enabling efficient deployment on resource-constrained devices. Specifically, FedQuad first identifies the feasible and efficient combinations of LoRA depth and the number of activation quantization layers based on device-specific resource constraints. Subsequently, FedQuad employs a greedy strategy to select the optimal configurations for each device, effectively accommodating system heterogeneity. Extensive experiments demonstrate that FedQuad achieves a 1.4-5.3x convergence acceleration compared to state-of-the-art baselines when reaching target accuracy, highlighting its efficiency and deployability in resource-constrained and heterogeneous end-device environments.
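As a rough illustration of the configuration search and greedy selection described in the abstract, the toy sketch below enumerates (LoRA depth, number of quantized layers, bit-width) triples that fit a client's memory and compute budget, then greedily prefers deeper LoRA and less aggressive quantization. The cost-model constants, names, and tie-breaking order are assumptions for illustration only, not the paper's actual profiler or scheduler.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical cost-model constants (not from the paper): activation memory per
# layer at fp16 (MB) and the relative backward cost of one tunable layer.
ACT_MB_FP16_PER_LAYER = 24.0
BACKWARD_COST_PER_LAYER = 1.0

@dataclass
class ClientBudget:
    mem_mb: float   # available device memory for activations
    compute: float  # relative compute budget per local step

def feasible_configs(budget, num_layers=24, bit_widths=(4, 8)):
    """Enumerate (LoRA depth d, quantized layers q, bit-width b) triples that fit
    the budget. Depth d means the last d layers carry tunable LoRA modules, so
    only their activations must be retained for backpropagation."""
    configs = []
    for d, b in product(range(1, num_layers + 1), bit_widths):
        for q in range(0, d + 1):
            # q of the d retained layers store activations at b bits, the rest at fp16.
            mem = (d - q) * ACT_MB_FP16_PER_LAYER + q * ACT_MB_FP16_PER_LAYER * b / 16
            cost = d * BACKWARD_COST_PER_LAYER
            if mem <= budget.mem_mb and cost <= budget.compute:
                configs.append((d, q, b))
    return configs

def greedy_select(budget, num_layers=24):
    """Greedy rule: deepest feasible LoRA depth first, then the fewest quantized
    layers and the highest bit-width (least approximation error)."""
    configs = feasible_configs(budget, num_layers)
    return max(configs, key=lambda c: (c[0], -c[1], c[2])) if configs else None

# Heterogeneous clients pick their own configurations independently.
for client in [ClientBudget(mem_mb=96, compute=8), ClientBudget(mem_mb=480, compute=20)]:
    print(client, "->", greedy_select(client))
```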
Problem

Research questions and friction points this paper is trying to address.

High memory usage for storing activations during federated fine-tuning
Heavy computation from full backpropagation through large language models
Resource heterogeneity across devices, which causes synchronization bottlenecks in FedFT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive LoRA depth adjustment per device (see the sketch after this list)
Activation quantization reduces memory usage
Greedy strategy optimizes device configurations
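The sketch below illustrates the adaptive LoRA depth idea in plain PyTorch: LoRA adapters are attached only to the last `depth` layers counted from the output and everything else is frozen, so weaker devices backpropagate through fewer layers. `Block`, `LoRALinear`, and `apply_lora_depth` are hypothetical names for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank adapter: W x + B (A x)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a.t() @ self.lora_b.t()

class Block(nn.Module):
    """Stand-in for one transformer layer."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, x):
        return torch.relu(self.proj(x))

def apply_lora_depth(layers: nn.ModuleList, depth: int, rank: int = 8):
    """Attach LoRA only to the last `depth` layers and freeze the rest, so the
    backward pass stops after `depth` layers from the output."""
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad_(False)
        if i >= len(layers) - depth:
            layer.proj = LoRALinear(layer.proj, rank)
    return layers

# A weak device might pick depth=2 on a 6-block model; a strong one depth=6.
blocks = apply_lora_depth(nn.ModuleList([Block(64) for _ in range(6)]), depth=2)
print([n for n, p in blocks.named_parameters() if p.requires_grad])
# -> only lora_a / lora_b of the last two blocks are trainable
```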
Rukuo Li
School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China, 230027, and also with Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, Jiangsu, China, 215123
Jianchun Liu
University of Science and Technology of China
Edge Computing, Federated Learning, Model Inference
Hongli Xu
University of Science and Technology of China
Software Defined Network, Cooperative Communication, Sensor Networks
Liusheng Huang
Professor of Computer Science, University of Science and Technology of China
Wireless Networks, Information Security