🤖 AI Summary
To address contribution imbalance and fairness deficiencies arising from heterogeneous client resources in federated learning, this paper proposes FLamma, an incentive-compatible framework grounded in Stackelberg game theory. FLamma models the server–client hierarchical interaction via an adaptive γ-decay mechanism and integrates a progressive influence-balancing strategy, enabling rational client responses and collaborative optimization under non-IID data. The method dynamically adjusts local training rounds and aggregation weights to jointly optimize global convergence speed, model accuracy, and individual fairness. Experiments on both IID and non-IID benchmarks demonstrate that FLamma reduces the Fairness Gap by 37%, yields significantly more equitable accuracy distributions across clients, and outperforms mainstream baselines (including FedAvg) in both global model accuracy and convergence rate.
📝 Abstract
Federated Learning (FL) has gained prominence as a decentralized machine learning paradigm, allowing clients to collaboratively train a global model while preserving data privacy. Despite its potential, FL faces significant challenges in heterogeneous environments, where varying client resources and capabilities can undermine overall system performance. Existing approaches primarily focus on maximizing global model accuracy, often at the expense of fairness among clients and system efficiency, particularly in non-IID (non-Independent and Identically Distributed) settings. In this paper, we introduce FLamma, a novel Federated Learning framework based on an adaptive gamma-decay Stackelberg game, designed to address these limitations and promote fairness. Our approach lets the server act as the leader, dynamically adjusting a decay factor, while clients, acting as followers, optimally select their number of local epochs to maximize their utility. Over time, the server incrementally balances client influence, initially rewarding higher-contributing clients and gradually leveling their impact, driving the system toward a Stackelberg Equilibrium. Extensive simulations on both IID and non-IID datasets show that our method significantly improves fairness in accuracy distribution without compromising overall model performance or convergence speed, outperforming traditional FL baselines.
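The leader–follower loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: the concave client utility `gamma * log(1 + e) - cost * e`, its closed-form best response, and the blending rule that shifts aggregation weights from effort-proportional toward uniform as gamma decays are all hypothetical choices made here to show the shape of the mechanism.

```python
def best_response_epochs(gamma, cost, e_max=10):
    """Follower's utility-maximizing local epoch count.

    Assumed utility (illustrative, not from the paper):
        u(e) = gamma * log(1 + e) - cost * e
    Setting du/de = 0 gives e* = gamma / cost - 1, clamped to [1, e_max].
    """
    e = gamma / cost - 1.0
    return max(1.0, min(float(e_max), e))

def run_rounds(costs, gamma0=8.0, decay=0.8, rounds=5):
    """Leader decays gamma each round; followers best-respond.

    Aggregation weights blend an effort-proportional share with a
    uniform share using alpha = gamma / gamma0, so early rounds reward
    higher-contributing clients and later rounds level client influence
    (a hypothetical instantiation of the influence-balancing idea).
    """
    n = len(costs)
    gamma = gamma0
    history = []
    for _ in range(rounds):
        epochs = [best_response_epochs(gamma, c) for c in costs]
        total = sum(epochs)
        alpha = gamma / gamma0  # influence-balancing knob in [0, 1]
        weights = [alpha * e / total + (1 - alpha) / n for e in epochs]
        history.append((gamma, epochs, weights))
        gamma *= decay  # leader's adaptive decay step
    return history

history = run_rounds(costs=[0.5, 1.0, 2.0])
```

With heterogeneous per-epoch costs, the weight spread across clients shrinks round over round as gamma decays, which is the qualitative behavior the abstract attributes to the influence-balancing schedule.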