🤖 AI Summary
This work addresses fairness in federated learning under heterogeneous, intermittent client participation, where existing methods often overlook disparities in participation opportunities, leaving intermittently available clients persistently underrepresented. To mitigate this, the authors propose a cumulative utility fairness principle built on an availability-normalized cumulative utility metric, which decouples physical availability constraints from scheduling bias and instead focuses on the long-term utility each client accrues per participation opportunity. Building on this principle, they design a fairness-aware objective function and aggregation mechanism that explicitly account for non-IID, temporally varying data distributions. Experiments demonstrate that the proposed approach substantially improves long-term representational fairness on temporally skewed, non-IID federated benchmarks while keeping model performance close to optimal.
📝 Abstract
In real-world federated learning (FL) systems, client participation is intermittent, heterogeneous, and often correlated with data characteristics or resource constraints. Existing fairness approaches in FL primarily focus on equalizing loss or accuracy conditional on participation, implicitly assuming that clients have comparable opportunities to contribute over time. However, when participation itself is uneven, these objectives can lead to systematic under-representation of intermittently available clients, even if per-round performance appears fair. We propose cumulative utility parity, a fairness principle that evaluates whether clients receive comparable long-term benefit per participation opportunity, rather than per training round. To operationalize this notion, we introduce availability-normalized cumulative utility, which disentangles unavoidable physical constraints from avoidable algorithmic bias arising from scheduling and aggregation. Experiments on temporally skewed, non-IID federated benchmarks demonstrate that our approach substantially improves long-term representation parity, while maintaining near-perfect performance.
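To make the core metric concrete, here is a minimal illustrative sketch of availability-normalized cumulative utility as the abstract describes it: each client's accrued utility is summed over rounds and divided by its number of participation opportunities, so a client that is available less often is not penalized for rounds it could not join. This is not the paper's implementation; the function names, the per-round utility representation, and the max-min parity gap used as a fairness readout are all assumptions for illustration.

```python
# Illustrative sketch only -- names, data layout, and the parity-gap readout
# are assumptions, not the paper's actual implementation.

def availability_normalized_utility(utility_per_round, available_per_round):
    """Per client: cumulative utility divided by the number of rounds the
    client was physically available (its participation opportunities)."""
    normalized = {}
    for client, utilities in utility_per_round.items():
        opportunities = sum(available_per_round[client])
        # Guard against clients with zero recorded opportunities.
        normalized[client] = sum(utilities) / max(opportunities, 1)
    return normalized

def parity_gap(normalized):
    """Max-min gap across clients; smaller means long-term utility is
    spread more evenly per participation opportunity."""
    values = list(normalized.values())
    return max(values) - min(values)

# Toy example: client "b" is available only half as often as "a", but
# accrues the same utility per opportunity, so the metric treats the two
# clients as equally well served (gap of 0).
utility = {"a": [1.0, 1.0, 1.0, 1.0], "b": [1.0, 0.0, 1.0, 0.0]}
avail   = {"a": [1, 1, 1, 1],         "b": [1, 0, 1, 0]}
norm = availability_normalized_utility(utility, avail)
```

Under a plain per-round view, client "b" would appear to receive half the utility of "a"; normalizing by availability is what separates the unavoidable physical constraint from avoidable scheduling bias.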