🤖 AI Summary
To address three key challenges in federated LoRA fine-tuning (high communication overhead, aggregation misalignment, and performance degradation under differential privacy (DP) due to noise amplification), this paper proposes Fed-SB. Its core builds on LoRA-SB, a recently proposed adapter parameterization that trains only a small square matrix R between frozen adapters B and A. Because B and A are shared and fixed, directly averaging R across clients gives exact adapter alignment and lossless aggregation, and the per-round communication cost becomes independent of the number of clients. In private federated settings, this design simultaneously reduces the required DP noise magnitude (fewer trainable parameters means less injected noise) and avoids the noise amplification incurred by other methods. Experiments demonstrate that Fed-SB achieves state-of-the-art performance on commonsense reasoning, arithmetic reasoning, and natural language inference tasks. It reduces communication costs by up to 230×, significantly improves accuracy and privacy-budget utilization under DP constraints, and reshapes the communication–performance Pareto frontier.
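The exactness claim follows from linearity: when every client shares the same frozen B and A, averaging the small R matrices equals averaging the full low-rank updates B·R·A, whereas averaging per-client B and A separately (as in naive federated LoRA) does not. A minimal NumPy sketch (shapes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_clients = 8, 2, 3

# Shared, frozen adapters B (d x r) and A (r x d); only R (r x r) is trained.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, d))

# Each client learns its own small square matrix R_i.
Rs = [rng.standard_normal((r, r)) for _ in range(n_clients)]

# Server aggregates by averaging only the R matrices (r*r numbers per client).
R_avg = np.mean(Rs, axis=0)

# Exact: mean_i(B @ R_i @ A) == B @ mean_i(R_i) @ A, since matmul is linear in R.
update_from_avg_R = B @ R_avg @ A
avg_of_updates = np.mean([B @ R @ A for R in Rs], axis=0)
print(np.allclose(update_from_avg_R, avg_of_updates))  # True

# Contrast: in naive federated LoRA each client holds its own (B_i, A_i),
# and averaging the factors separately does NOT recover the average update.
Bs = [rng.standard_normal((d, r)) for _ in range(n_clients)]
As = [rng.standard_normal((r, d)) for _ in range(n_clients)]
naive = np.mean(Bs, axis=0) @ np.mean(As, axis=0)
exact = np.mean([Bi @ Ai for Bi, Ai in zip(Bs, As)], axis=0)
print(np.allclose(naive, exact))  # False (aggregation misalignment)
```

The second comparison is exactly the "suboptimal updates" problem the paper attributes to traditional federated averaging of individual adapters.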
📝 Abstract
Low-Rank Adaptation (LoRA) has become ubiquitous for efficiently fine-tuning foundation models. However, federated fine-tuning using LoRA is challenging due to suboptimal updates arising from traditional federated averaging of individual adapters. Existing solutions either incur prohibitively high communication cost that scales linearly with the number of clients or suffer from performance degradation due to limited expressivity. We introduce Federated Silver Bullet (Fed-SB), a novel approach for federated fine-tuning of LLMs using LoRA-SB, a recently proposed low-rank adaptation method. LoRA-SB optimally aligns the optimization trajectory with the ideal low-rank full fine-tuning projection by learning a small square matrix (R) between adapters B and A, keeping other components fixed. Direct averaging of R guarantees exact updates, substantially reducing communication cost, which remains independent of the number of clients, and enables scalability. Fed-SB achieves state-of-the-art performance across commonsense reasoning, arithmetic reasoning, and language inference tasks while reducing communication costs by up to 230x. In private settings, Fed-SB further improves performance by (1) reducing trainable parameters, thereby lowering the noise required for differential privacy and (2) avoiding noise amplification introduced by other methods. Overall, Fed-SB establishes a new Pareto frontier in the tradeoff between communication and performance, offering an efficient and scalable solution for both private and non-private federated fine-tuning. Our code is publicly available at https://github.com/CERT-Lab/fed-sb.
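The communication saving can be seen with back-of-envelope arithmetic. Per client per round, vanilla federated LoRA on a d×d weight uploads both factors B (d×r) and A (r×d), while Fed-SB uploads only the r×r matrix R. The dimensions below are hypothetical and the resulting ratio is purely illustrative of the scaling; the paper's reported figure (up to 230x) depends on the actual model dimensions and on LoRA-SB typically using a larger rank to retain expressivity:

```python
# Illustrative per-client, per-round upload counts for a single d x d weight.
d, r = 4096, 16          # hypothetical hidden size and LoRA rank

lora_params = 2 * d * r  # vanilla LoRA: B (d x r) and A (r x d)
fed_sb_params = r * r    # Fed-SB: only the square matrix R

print(lora_params, fed_sb_params, lora_params // fed_sb_params)
# 131072 256 512
```

Note that `fed_sb_params` does not depend on `d` or on the number of clients, which is what lets the cost stay flat as the federation scales.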