Private and Robust Contribution Evaluation in Federated Learning

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a challenge in cross-institutional federated learning: secure aggregation preserves privacy but hinders fair and robust evaluation of participants' contributions, undermining incentive mechanisms and the detection of malicious clients. To reconcile privacy with contribution assessment, the paper proposes two marginal-contribution scoring methods compatible with secure aggregation: Fair-Private, which satisfies classical fairness axioms, and Everybody-Else, which improves robustness against manipulation by eliminating self-evaluation. Both methods operate without exposing raw model updates, achieving the first co-design of privacy preservation and contribution evaluation. The scores build on a secure aggregation framework and combine marginal-difference scoring with Shapley value approximation. Extensive experiments on multiple medical imaging datasets and CIFAR10 show that the proposed approaches significantly outperform existing baselines, yielding more accurate Shapley-based rankings, improved model performance, and better detection of anomalous participants.

📝 Abstract
Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-silo settings. Our scores consistently outperform existing baselines, better approximate Shapley-induced client rankings, and improve downstream model performance as well as misbehavior detection. These results demonstrate that fairness, privacy, robustness, and practical utility can be achieved jointly in federated contribution evaluation, offering a principled solution for real-world cross-silo deployments.
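The paper's Fair-Private and Everybody-Else scores are not reproduced here; as an illustration of the baseline marginal-contribution ideas the abstract contrasts (the crude Leave-One-Out score versus the exact Shapley value), here is a minimal sketch on a toy utility function over client subsets. The client names and the additive utility are illustrative assumptions, not from the paper.

```python
import itertools
import math
from typing import Callable, Dict, FrozenSet, List

Utility = Callable[[FrozenSet[str]], float]

def leave_one_out(clients: List[str], utility: Utility) -> Dict[str, float]:
    """Score each client as the utility drop when it is removed
    from the grand coalition (the crude baseline)."""
    grand = frozenset(clients)
    base = utility(grand)
    return {c: base - utility(grand - {c}) for c in clients}

def shapley_exact(clients: List[str], utility: Utility) -> Dict[str, float]:
    """Exact Shapley value: average marginal contribution of each
    client over all subsets of the other clients, with the standard
    combinatorial weights |S|! (n - |S| - 1)! / n!."""
    n = len(clients)
    scores = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for r in range(n):
            weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                scores[c] += weight * (utility(s | {c}) - utility(s))
    return scores

# Toy additive utility: each client contributes a fixed amount,
# so both scores should recover those amounts exactly.
values = {"hospital_a": 1.0, "hospital_b": 2.0, "hospital_c": 3.0}
u: Utility = lambda s: sum(values[x] for x in s)
```

For an additive utility both scores coincide; the exact Shapley computation is exponential in the number of clients, which is why practical systems (including this paper's setting) rely on approximations or cheaper marginal-difference scores.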
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Contribution Evaluation
Secure Aggregation
Privacy
Fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure Aggregation
Contribution Evaluation
Federated Learning
Shapley Value
Robustness
Delio Jaramillo Velez
University of La Laguna, Tenerife, Spain
Gergely Biczok
HUN-REN Hungarian Research Network, Budapest, Hungary
Alexandre Graell i Amat
Professor, Chalmers University of Technology
Coding Theory, Communications Theory
Johan Ostman
AI Sweden, Gothenburg, Sweden
Balazs Pejo
Budapest University of Technology and Economics, Budapest, Hungary