Towards Trustworthy Federated Learning with Untrusted Participants

📅 2025-05-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning without a trusted central server, simultaneously achieving privacy preservation, robust aggregation, and high model utility remains challenging. Method: This paper proposes CafCor, an algorithm that unifies these objectives while relying solely on pairwise private random seeds shared between workers, without any trusted third party. CafCor integrates robust gradient aggregation with correlated noise injection built on this shared pairwise randomness. Contribution/Results: Theoretically, CafCor achieves a provably strong privacy-utility trade-off with resistance to collusion, significantly outperforming local differential privacy (DP) while approaching the utility of central DP. Empirically, on standard benchmarks, CafCor attains high test accuracy and strong Byzantine robustness under rigorous DP guarantees, with model utility close to that of central DP.

📝 Abstract
Resilience against malicious parties and data privacy are essential for trustworthy distributed learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of workers shares a randomness seed unknown to others. In a setting where malicious workers may collude with an untrusted server, we propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection, leveraging shared randomness between workers. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which do not make any trust assumption, while approaching central DP utility, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor's practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.
Problem

Research questions and friction points this paper is trying to address.

Achieving trustworthy federated learning with untrusted participants
Balancing privacy and utility without a trusted central server
Combining robust gradient aggregation with correlated noise injection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pairwise randomness seeds shared between workers, unknown to others
Integrates robust gradient aggregation with correlated noise injection
Achieves strong privacy-utility trade-offs without trusting the server
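The core idea behind pairwise shared seeds can be illustrated with a minimal sketch. This is not CafCor's actual protocol (which additionally calibrates noise for DP and applies robust aggregation); it only shows how correlated noise derived from a shared seed cancels in the honest aggregate. All names and values below are hypothetical.

```python
import numpy as np

def pairwise_noise(worker_id, peer_ids, pair_seeds, dim):
    """Noise built from pairwise shared seeds: each pair's share is added
    with opposite signs by the two workers, so it cancels in the sum."""
    noise = np.zeros(dim)
    for j in peer_ids:
        key = tuple(sorted((worker_id, j)))          # same key on both sides
        rng = np.random.default_rng(pair_seeds[key])  # same stream on both sides
        z = rng.standard_normal(dim)
        # convention: the lower-id worker adds +z, the higher-id worker adds -z
        noise += z if worker_id < j else -z
    return noise

# toy run: 3 honest workers, 2-D gradients (illustrative values only)
workers = [0, 1, 2]
seeds = {(i, j): 1000 + 10 * i + j for i in workers for j in workers if i < j}
grads = {w: np.array([1.0, 2.0]) for w in workers}
masked = {w: grads[w] + pairwise_noise(w, [p for p in workers if p != w], seeds, 2)
          for w in workers}
total = sum(masked.values())
print(np.allclose(total, sum(grads.values())))  # True: the noise cancels
```

Each masked gradient individually looks random to the server, yet the aggregate equals the true sum; a real scheme must also add noise that survives aggregation to protect against colluding workers who can subtract their shared seeds.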