FRIDA: Free-Rider Detection using Privacy Attacks

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
In federated learning, free-riders (clients that abstain from local training yet exploit the global model) undermine collaboration fairness, impede convergence, and impose additional computational overhead on honest participants. To address this, we propose FRIDA, the first framework to repurpose privacy inference attacks (e.g., membership and property inference) for free-rider detection. FRIDA inversely infers individual clients' data contributions by analyzing global model outputs, enabling explicit identification of non-contributory behavior. Crucially, it operates without access to local data or training internals, relying solely on model predictions. Evaluated under diverse non-IID data distributions, FRIDA significantly outperforms existing detection methods, achieving an average 12.7% improvement in identification accuracy. Moreover, it enhances system robustness against strategic evasion and promotes fairer resource allocation across participants.
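The intuition behind a membership-inference signal can be illustrated with a toy sketch: a model that was genuinely trained on a client's data tends to show low loss on probe samples from that data, while a free-rider's submitted model does not. All function names, thresholds, and loss values below are illustrative assumptions, not FRIDA's actual implementation.

```python
# Hypothetical sketch of a loss-threshold membership inference test
# used as a free-rider signal. Thresholds are illustrative assumptions.
import numpy as np

def membership_scores(losses: np.ndarray, threshold: float) -> np.ndarray:
    """A probe sample is inferred to be a training member when its loss
    under the client's submitted model falls below the threshold."""
    return (losses < threshold).astype(float)

def flag_free_rider(losses: np.ndarray, threshold: float = 0.5,
                    min_member_rate: float = 0.6) -> bool:
    """If too few probe samples look like training members, the client
    likely skipped local training."""
    return bool(membership_scores(losses, threshold).mean() < min_member_rate)

# An honest client's model fits its data (low losses); a free-rider's
# submitted model does not (high losses on the claimed data).
honest_losses = np.array([0.05, 0.12, 0.30, 0.08, 0.21])
rider_losses  = np.array([1.40, 0.95, 2.10, 1.75, 0.88])
print(flag_free_rider(honest_losses))  # False
print(flag_free_rider(rider_losses))   # True
```

Real membership inference attacks use stronger statistics (e.g., shadow models or calibrated likelihood ratios) rather than a fixed loss cutoff; the sketch only conveys the direction of the signal.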

📝 Abstract
Federated learning is increasingly popular as it enables multiple parties with limited datasets and resources to train a high-performing machine learning model collaboratively. However, similarly to other collaborative systems, federated learning is vulnerable to free-riders -- participants who do not contribute to the training but still benefit from the shared model. Free-riders not only compromise the integrity of the learning process but also slow down the convergence of the global model, resulting in increased costs for the honest participants. To address this challenge, we propose FRIDA: free-rider detection using privacy attacks, a framework that leverages inference attacks to detect free-riders. Unlike traditional methods that only capture the implicit effects of free-riding, FRIDA directly infers details of the underlying training datasets, revealing characteristics that indicate free-rider behaviour. Through extensive experiments, we demonstrate that membership and property inference attacks are effective for this purpose. Our evaluation shows that FRIDA outperforms state-of-the-art methods, especially in non-IID settings.
Problem

Research questions and friction points this paper is trying to address.

Detecting free-riders in federated learning systems
Identifying non-contributing participants using privacy attacks
Preserving system integrity against exploitation in collaborative training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses membership inference attacks
Employs property inference attacks
Directly infers whether clients genuinely performed local training
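The two attack signals above could be combined into a per-client decision along the following lines. This is a toy sketch: the score names, weights, and threshold are hypothetical assumptions for illustration, not values from the paper.

```python
# Toy sketch: combining hypothetical membership-inference (MIA) and
# property-inference (PIA) scores into a per-client free-rider flag.
# Weights and threshold are illustrative assumptions, not FRIDA's.
def detect_free_riders(mia_member_rate: dict, pia_property_match: dict,
                       w_mia: float = 0.5, w_pia: float = 0.5,
                       threshold: float = 0.5) -> list:
    """Flag clients whose combined evidence of genuine training falls
    below the threshold."""
    flagged = []
    for cid in mia_member_rate:
        evidence = (w_mia * mia_member_rate[cid]
                    + w_pia * pia_property_match[cid])
        if evidence < threshold:
            flagged.append(cid)
    return flagged

mia = {"c1": 0.90, "c2": 0.15, "c3": 0.80}  # fraction of probes inferred as members
pia = {"c1": 0.85, "c2": 0.20, "c3": 0.75}  # agreement of inferred dataset properties
print(detect_free_riders(mia, pia))  # ['c2']
```

A weighted sum is only one way to fuse the signals; a deployment could equally use per-attack thresholds or an anomaly detector over the score vector.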