WinFLoRA: Incentivizing Client-Adaptive Aggregation in Federated LoRA under Privacy Heterogeneity

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge in federated LoRA fine-tuning where clients inject differential privacy noise of varying magnitudes due to heterogeneous privacy requirements, leading to misalignment between local contributions and the global objective. To resolve this, the authors propose WinFLoRA, which introduces aggregation weights as an incentive mechanism by dynamically estimating the noise level in each client’s uploaded LoRA adapter and adjusting its weight accordingly. This prioritizes the integration of low-noise, high-quality updates, achieving utility alignment in privacy-heterogeneous settings without third-party coordination. Experimental results demonstrate that WinFLoRA improves global accuracy by up to 52.58% across multiple large language models and datasets, with client utility reaching 2.56 times that of baseline methods.

📝 Abstract
Large Language Models (LLMs) increasingly underpin intelligent web applications, from chatbots to search and recommendation, where efficient specialization is essential. Low-Rank Adaptation (LoRA) enables such adaptation with minimal overhead, while federated LoRA allows web service providers to fine-tune shared models without data sharing. However, in privacy-sensitive deployments, clients inject varying levels of differential privacy (DP) noise, creating privacy heterogeneity that misaligns individual incentives and global performance. In this paper, we propose WinFLoRA, a privacy-heterogeneous federated LoRA framework that uses noise-aware aggregation weights as incentives. Specifically, each client's noise level is estimated from its uploaded LoRA adapter. A larger weight indicates greater influence on the global model and better downstream task performance, rewarding lower-noise contributions. By up-weighting low-noise updates, WinFLoRA improves global accuracy while accommodating clients' heterogeneous privacy requirements. Consequently, WinFLoRA aligns heterogeneous client utility in terms of privacy and downstream performance with global model objectives without third-party involvement. Extensive evaluations demonstrate that across multiple LLMs and datasets, WinFLoRA achieves up to 52.58% higher global accuracy and up to 2.56x higher client utility than state-of-the-art benchmarks. Source code is publicly available at https://github.com/koums24/WinFLoRA.git.
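The abstract describes the core mechanism: estimate each client's DP noise level from its uploaded LoRA adapter, then assign larger aggregation weights to lower-noise updates. The sketch below is a minimal illustration of that idea, not the paper's actual estimator or weighting rule: it assumes the noise level can be proxied by the spectral tail of the update (singular values beyond the adapter's nominal rank) and uses simple inverse-variance weighting. The function names `estimate_noise_std` and `noise_aware_aggregate` are hypothetical.

```python
import numpy as np

def estimate_noise_std(delta_w, rank):
    """Crude noise-level proxy (an assumption, not WinFLoRA's estimator):
    a rank-`rank` LoRA update concentrates its signal in the top singular
    values, so the spectral tail mostly reflects additive DP noise."""
    s = np.linalg.svd(delta_w, compute_uv=False)
    tail = s[rank:]
    if tail.size == 0:
        return 1e-8
    # For an i.i.d. Gaussian perturbation, tail energy scales with sigma^2.
    return float(np.sqrt(np.mean(tail**2) / max(delta_w.shape)))

def noise_aware_aggregate(updates, rank):
    """Weight each client's update inversely to its estimated noise
    variance, normalize, and average (a minimal weighting sketch)."""
    sigmas = np.array([estimate_noise_std(u, rank) for u in updates])
    weights = 1.0 / (sigmas**2 + 1e-12)   # inverse-variance weighting
    weights /= weights.sum()              # normalize to sum to 1
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return aggregated, weights

# Toy example: two clients share a rank-8 signal; one adds heavy DP noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
clean = signal + 0.01 * rng.standard_normal((64, 64))
noisy = signal + 1.00 * rng.standard_normal((64, 64))
agg, w = noise_aware_aggregate([clean, noisy], rank=8)
# The low-noise client receives the larger aggregation weight.
assert w[0] > w[1]
```

Under this toy weighting the low-noise client dominates the aggregate, which is the incentive the paper describes: reducing injected noise directly increases a client's influence on the global model.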
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
LoRA
Privacy Heterogeneity
Differential Privacy
Client Incentives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated LoRA
Privacy Heterogeneity
Client-Adaptive Aggregation
Differential Privacy
Incentive Mechanism
Mengsha Kou
RMIT University
Xiaoyu Xia
School of Computing Technologies, RMIT University
Parallel and Distributed Computing, System Security, Edge Computing, Sustainable Computing
Ziqi Wang
RMIT University
Ibrahim Khalil
Professor, School of Computing Technologies, STEM College, RMIT University
Privacy, Blockchain, Industry 4.0, e-health, m-health
Runkun Luo
Huazhong University of Science and Technology
Jingwen Zhou
CSIRO’s Data61
Minhui Xue
CSIRO’s Data61 and Responsible AI Research (RAIR) Centre