FedRandom: Sampling Consistent and Accurate Contribution Values in Federated Learning

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of participant contribution estimates in federated learning, which stems from insufficient sampling and undermines both incentive fairness and the detection of malicious clients. For the first time, the authors formally model this instability as a sampling problem in statistical estimation. They propose FedRandom, a method that combines random sampling with a multi-round aggregation mechanism within the federated learning framework to improve the consistency and accuracy of contribution estimates. Experiments on benchmark datasets including CIFAR-10 and MNIST show that FedRandom reduces the distance between estimated and true contributions by more than a third in over half of the evaluated scenarios, while markedly improving estimation stability in over 90% of cases.
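
The core statistical idea behind the method, as described in the summary, is variance reduction through repeated sampling: many noisy contribution estimates, averaged together, are more consistent than any single one. The sketch below is illustrative only and assumes a simplified noise model for per-round estimates; the function names and the Gaussian noise are not from the paper.

```python
import random
import statistics

def contribution_sample(true_value, noise=0.3, rng=random):
    """One noisy contribution estimate, as a single
    aggregation round might yield (illustrative model)."""
    return true_value + rng.gauss(0, noise)

def averaged_estimate(true_value, n_samples, rng=random):
    """FedRandom-style idea: draw several random samples
    and average them to stabilize the estimate."""
    samples = [contribution_sample(true_value, rng=rng)
               for _ in range(n_samples)]
    return statistics.mean(samples)

rng = random.Random(0)
true_contrib = 1.0

# Compare the spread of single-sample estimates against
# estimates averaged over 25 random samples each.
single = [contribution_sample(true_contrib, rng=rng) for _ in range(200)]
multi = [averaged_estimate(true_contrib, 25, rng=rng) for _ in range(200)]

print(statistics.stdev(single))  # larger spread
print(statistics.stdev(multi))   # smaller spread after averaging
```

Averaging n independent samples shrinks the estimator's standard deviation by roughly a factor of sqrt(n), which is the statistical mechanism the paper leverages to stabilize contribution values.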

📝 Abstract
Federated Learning is a privacy-preserving decentralized approach for Machine Learning tasks. In industry deployments characterized by a limited number of entities possessing abundant data, the significance of a participant's role in shaping the global model becomes pivotal, given that participation in a federation incurs costs and participants may expect compensation for their involvement. Additionally, the contributions of participants serve as a crucial means to identify and address potential malicious actors and free-riders. However, fairly assessing individual contributions remains a significant hurdle. Recent works have demonstrated a considerable inherent instability in contribution estimations across aggregation strategies. While employing a different strategy may offer convergence benefits, this instability can have potentially harmful effects on participants' willingness to engage in the federation. In this work, we introduce FedRandom, a novel mitigation technique for the contribution instability problem. Treating the instability as a statistical estimation problem, FedRandom allows us to generate more samples than regular FL strategies do. We show that these additional samples provide a more consistent and reliable evaluation of participant contributions. We demonstrate our approach using different data distributions across CIFAR-10, MNIST, CIFAR-100 and FMNIST and show that FedRandom reduces the overall distance to the ground truth by more than a third in half of all evaluated scenarios, and improves stability in more than 90% of cases.
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Contribution Estimation
Instability
Fairness
Participant Incentive
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Contribution Estimation
Statistical Sampling
FedRandom
Stability