🤖 AI Summary
This work addresses the challenge in federated LoRA fine-tuning where clients inject differential privacy noise of varying magnitudes due to heterogeneous privacy requirements, leading to misalignment between local contributions and the global objective. To resolve this, the authors propose WinFLoRA, which introduces aggregation weights as an incentive mechanism by dynamically estimating the noise level in each client’s uploaded LoRA adapter and adjusting its weight accordingly. This prioritizes the integration of low-noise, high-quality updates, achieving utility alignment in privacy-heterogeneous settings without third-party coordination. Experimental results demonstrate that WinFLoRA improves global accuracy by up to 52.58% across multiple large language models and datasets, with client utility reaching 2.56 times that of baseline methods.
📝 Abstract
Large Language Models (LLMs) increasingly underpin intelligent web applications, from chatbots to search and recommendation, where efficient specialization is essential. Low-Rank Adaptation (LoRA) enables such adaptation with minimal overhead, while federated LoRA allows web service providers to fine-tune shared models without data sharing. However, in privacy-sensitive deployments, clients inject varying levels of differential privacy (DP) noise, creating privacy heterogeneity that misaligns individual incentives with global performance. In this paper, we propose WinFLoRA, a privacy-heterogeneous federated LoRA framework that uses aggregation weights as noise-aware incentives. Specifically, each client's noise level is estimated from its uploaded LoRA adapter. A larger weight indicates greater influence on the global model and better downstream task performance, rewarding lower-noise contributions. By up-weighting low-noise updates, WinFLoRA improves global accuracy while accommodating clients' heterogeneous privacy requirements. Consequently, WinFLoRA aligns heterogeneous client utility, in terms of privacy and downstream performance, with global model objectives without third-party involvement. Extensive evaluations across multiple LLMs and datasets demonstrate that WinFLoRA achieves up to 52.58% higher global accuracy and up to 2.56x higher client utility than state-of-the-art benchmarks. Source code is publicly available at https://github.com/koums24/WinFLoRA.git.
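The core idea, estimating per-client noise from uploaded LoRA adapters and converting lower estimated noise into larger aggregation weights, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the noise estimator here (deviation of an uploaded adapter from the previous global adapter) and the softmax weighting with a `temperature` parameter are assumptions introduced for illustration.

```python
import numpy as np

def estimate_noise(adapter: np.ndarray, reference: np.ndarray) -> float:
    # Hypothetical proxy for a client's DP noise level: the standard
    # deviation of its adapter's deviation from the previous global adapter.
    # (WinFLoRA's actual estimator is not specified in this summary.)
    return float(np.std(adapter - reference))

def noise_aware_weights(noise_levels, temperature: float = 1.0) -> np.ndarray:
    # Lower estimated noise -> larger aggregation weight, via a softmax
    # over negative noise scores (an assumed weighting scheme).
    scores = -np.asarray(noise_levels, dtype=float) / temperature
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()

def aggregate(adapters, reference):
    # Weighted average of client adapters; low-noise clients dominate.
    noise = [estimate_noise(a, reference) for a in adapters]
    w = noise_aware_weights(noise)
    global_adapter = sum(wi * ai for wi, ai in zip(w, adapters))
    return global_adapter, w
```

Under this sketch, a client that adds less DP noise to its LoRA update receives a larger weight, so its contribution has more influence on the global model, which is the incentive-alignment effect the paper describes.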