Incentive-Compatible Federated Learning with Stackelberg Game Modeling

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address contribution imbalance and fairness deficiencies arising from heterogeneous client resources in federated learning, this paper proposes FLamma, an incentive-compatible framework grounded in Stackelberg game theory. FLamma innovatively models server–client hierarchical interaction via an adaptive γ-decay mechanism and integrates a progressive influence-balancing strategy to enable rational client responses and collaborative optimization under non-IID data. The method dynamically adjusts local training rounds and aggregation weights to jointly optimize global convergence speed, model accuracy, and individual fairness. Experiments on both IID and non-IID benchmarks demonstrate that FLamma reduces the Fairness Gap by 37%, yields significantly more equitable accuracy distributions across clients, and outperforms mainstream baselines—including FedAvg—in both global model accuracy and convergence rate.

📝 Abstract
Federated Learning (FL) has gained prominence as a decentralized machine learning paradigm, allowing clients to collaboratively train a global model while preserving data privacy. Despite its potential, FL faces significant challenges in heterogeneous environments, where varying client resources and capabilities can undermine overall system performance. Existing approaches primarily focus on maximizing global model accuracy, often at the expense of fairness among clients and system efficiency, particularly in non-IID (non-Independent and Identically Distributed) settings. In this paper, we introduce FLamma, a novel Federated Learning framework built on an adaptive, gamma-based Stackelberg game, designed to address the aforementioned limitations and promote fairness. Our approach allows the server to act as the leader, dynamically adjusting a decay factor, while clients, acting as followers, optimally select their number of local epochs to maximize their utility. Over time, the server incrementally balances client influence, initially rewarding higher-contributing clients and gradually leveling their impact, driving the system toward a Stackelberg Equilibrium. Extensive simulations on both IID and non-IID datasets show that our method significantly improves fairness in accuracy distribution without compromising overall model performance or convergence speed, outperforming traditional FL baselines.
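The leader–follower loop the abstract describes can be sketched in a few lines. This is an illustrative toy, not FLamma's actual formulation: the client utility `gamma * log(1 + e) - cost * e` (diminishing accuracy returns minus per-epoch compute cost) and the geometric decay schedule are assumptions chosen to show the mechanism's shape — as the server decays gamma, rational clients choose fewer local epochs and their contribution-proportional aggregation weights flatten toward uniform.

```python
import math

def client_best_response(gamma, cost_per_epoch, max_epochs=10):
    """Follower step: choose local epochs e maximizing a toy utility
    u(e) = gamma * log(1 + e) - cost_per_epoch * e.
    (Hypothetical utility for illustration; not the paper's exact model.)"""
    return max(range(1, max_epochs + 1),
               key=lambda e: gamma * math.log(1 + e) - cost_per_epoch * e)

def run_rounds(costs, gamma0=5.0, decay=0.8, rounds=4):
    """Leader step: the server decays gamma each round; clients best-respond,
    and aggregation weights (here, proportional to epochs contributed)
    are progressively leveled as gamma shrinks."""
    history = []
    for t in range(rounds):
        gamma = gamma0 * decay ** t
        epochs = [client_best_response(gamma, c) for c in costs]
        total = sum(epochs)
        weights = [e / total for e in epochs]  # contribution-proportional
        history.append((gamma, epochs, weights))
    return history

# Three clients with increasing per-epoch cost (heterogeneous resources).
for gamma, epochs, weights in run_rounds([0.5, 1.0, 2.0]):
    print(round(gamma, 2), epochs, [round(w, 2) for w in weights])
```

Under this toy utility the continuous best response is roughly `e* = gamma / cost - 1`, so cheaper clients initially dominate the aggregation weights and the gap narrows each round — the "initially rewarding higher-contributing clients, gradually leveling their impact" dynamic from the abstract.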
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Resource Allocation
Data Imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

FLamma method
Stackelberg game model
federated learning optimization
Simin Javaherian
PhD student at the University of Louisiana
Federated Learning · Machine Learning · Game Theory
Bryce Turney
Research Assistant, University of Louisiana at Lafayette
Machine Learning · Deep Learning · Hardware Design · Automation · Computer Vision
Li Chen
School of Computing and Informatics, University of Louisiana at Lafayette
Nianfeng Tzeng
School of Computing and Informatics, University of Louisiana at Lafayette