Strategic Incentivization for Locally Differentially Private Federated Learning

📅 2025-08-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In federated learning, local differential privacy (LDP) safeguards client data privacy but degrades global model accuracy due to gradient perturbation, posing a fundamental privacy–utility trade-off. This work is the first to formalize this trade-off as a strategic game between clients and the server. We propose a dynamic token-based incentive mechanism wherein the server allocates tokens to clients according to their contribution quality—measured by gradient utility—thereby encouraging clients to adaptively reduce LDP noise magnitude. Our approach integrates LDP-compliant gradient perturbation, game-theoretic modeling of client-server interactions, and token-driven access control for privacy-utility coordination. Experiments demonstrate that, under strict LDP guarantees (ε ≤ 2), our mechanism accelerates model convergence by 37% and improves final test accuracy by 4.2–6.8 percentage points, achieving dynamic co-optimization of privacy preservation and model utility.

Technology Category

Application Category

📝 Abstract
In Federated Learning (FL), multiple clients jointly train a machine learning model by sharing gradient information, instead of raw data, with a server over multiple rounds. To address the possibility of information leakage in spite of sharing only the gradients, Local Differential Privacy (LDP) is often used. In LDP, clients add a selective amount of noise to the gradients before sending the same to the server. Although such noise addition protects the privacy of clients, it leads to a degradation in global model accuracy. In this paper, we model this privacy-accuracy trade-off as a game, where the sever incentivizes the clients to add a lower degree of noise for achieving higher accuracy, while the clients attempt to preserve their privacy at the cost of a potential loss in accuracy. A token based incentivization mechanism is introduced in which the quantum of tokens credited to a client in an FL round is a function of the degree of perturbation of its gradients. The client can later access a newly updated global model only after acquiring enough tokens, which are to be deducted from its balance. We identify the players, their actions and payoff, and perform a strategic analysis of the game. Extensive experiments were carried out to study the impact of different parameters.
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy and accuracy in federated learning with LDP
Incentivizing clients to reduce noise for better model performance
Strategic token-based mechanism for privacy-accuracy trade-off management
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-based incentivization for LDP in FL
Game theory models privacy-accuracy trade-off
Clients earn tokens for lower noise gradients
🔎 Similar Papers
No similar papers found.