Incentivize Contribution and Learn Parameters Too: Federated Learning with Strategic Data Owners

📅 2025-05-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In federated learning, rational clients often withhold full data contributions due to the high cost of data acquisition and computation. To address this, we propose the first dual-mechanism framework that jointly ensures incentive compatibility and model convergence. First, at the mechanism-design level, we introduce a budget-balanced monetary transfer scheme that guarantees full-data contribution in Nash equilibrium. Second, at the learning level, we ensure that the global model parameters converge to the optimal solution. Integrating game-theoretic analysis, mechanism design, and distributed optimization, we evaluate our method on real-world federated datasets: CIFAR-10, FeMNIST, and Twitter. Results demonstrate rapid convergence and significant improvements in both individual model accuracy and social welfare, while simultaneously satisfying individual rationality, incentive compatibility, and budget balance.
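The paper's actual transfer scheme is not reproduced here, but the budget-balance property it guarantees (monetary transfers across clients sum to zero, so the center neither subsidizes nor profits) can be illustrated with a toy scheme. The proportional-reward rule below is an illustrative assumption, not the authors' mechanism.

```python
# Toy illustration of a budget-balanced transfer scheme (NOT the paper's
# mechanism): each client is rewarded in proportion to its share of the
# total data contribution, and rewards are recentred so that all
# transfers sum to zero, i.e. the scheme is budget balanced.

def budget_balanced_transfers(contributions):
    """Map per-client data contributions to zero-sum monetary transfers."""
    n = len(contributions)
    total = sum(contributions)
    if total == 0:
        return [0.0] * n
    mean_share = 1.0 / n
    # Clients contributing more than the average share receive money;
    # clients contributing less pay, and the payments fund the rewards.
    return [c / total - mean_share for c in contributions]

transfers = budget_balanced_transfers([10, 30, 60])
print(transfers)
print(abs(sum(transfers)) < 1e-12)  # budget balance: transfers sum to zero
```

The recentring step is what makes the scheme budget balanced by construction; whether it also induces full contribution at equilibrium depends on the clients' cost functions, which is exactly what the paper's mechanism-design analysis addresses.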

๐Ÿ“ Abstract
Classical federated learning (FL) assumes that the clients have a limited amount of noisy data with which they voluntarily participate and contribute towards learning a global, more accurate model in a principled manner. The learning happens in a distributed fashion without sharing the data with the center. However, these methods do not consider the incentive of an agent for participating and contributing to the process, given that data collection and running a distributed algorithm are costly for the clients. The question of the rationality of contribution has been raised recently in the literature, and some results exist that consider this problem. This paper addresses the question of simultaneous parameter learning and incentivizing contribution, which distinguishes it from the extant literature. Our first mechanism incentivizes each client to contribute to the FL process at a Nash equilibrium and simultaneously learn the model parameters. However, this equilibrium outcome can be away from the optimum, where clients contribute their full data and the algorithm learns the optimal parameters. We propose a second mechanism with monetary transfers that is budget balanced and enables full data contribution along with optimal parameter learning. Large-scale experiments with real (federated) datasets (CIFAR-10, FeMNIST, and Twitter) show that these algorithms converge quickly in practice, yield good welfare guarantees, and deliver better model performance for all agents.
Problem

Research questions and friction points this paper is trying to address.

Incentivize strategic clients in federated learning participation
Simultaneously learn model parameters and ensure client contributions
Achieve optimal data contribution and parameter learning via mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incentivizes client contribution via Nash equilibrium
Uses monetary transfers for full data contribution
Ensures optimal parameter learning in federated learning
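The learning-level contribution concerns convergence of the global parameters when clients contribute at equilibrium. A standard contribution-weighted averaging step is sketched below as generic FedAvg-style aggregation; this is a common baseline for distributed parameter learning, not the paper's specific update rule or convergence analysis.

```python
# Contribution-weighted aggregation of client parameter vectors,
# FedAvg-style. A generic sketch of a learning-level aggregation step;
# the paper's actual algorithm is not reproduced here.

def aggregate(client_params, data_sizes):
    """Weighted average of per-client parameter vectors by data size."""
    total = sum(data_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, n_k in zip(client_params, data_sizes):
        w = n_k / total  # client's weight = its share of the total data
        for j in range(dim):
            global_params[j] += w * params[j]
    return global_params

# Two clients: the one holding more data pulls the global model toward it.
print(aggregate([[1.0, 0.0], [0.0, 1.0]], [75, 25]))  # → [0.75, 0.25]
```

Under such weighting, a client that withholds data also loses influence over the global model, which is one intuition for why contribution incentives and parameter learning interact.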
Drashthi Doshi (IIT Bombay, Mumbai, India)
Aditya Vema Reddy Kesari (IIT Bombay, Mumbai, India)
Swaprava Nath (Associate Professor, CSE, IIT Bombay; Game Theory, Mechanism Design, Artificial Intelligence, Optimization, Operations Research)
Avishek Ghosh (IIT Bombay, Mumbai, India)
Suhas S Kowshik (Microsoft, Bangalore, India)