Confidential Federated Computations

📅 2024-04-16
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing federated learning systems exhibit three critical privacy shortcomings: differential privacy (DP) degrades model utility via excessive noise injection; secure multi-party computation (SMPC) suffers from poor scalability and vulnerability to Sybil attacks; and neither approach robustly mitigates malicious server behavior. This paper proposes a novel federated architecture integrating trusted execution environments (TEEs) with open-source collaborative verification. It leverages TEEs to ensure server-side computational confidentiality and integrity; combines open-source code auditing, secure aggregation, and lightweight DP within the TEE to achieve verifiable privacy–utility trade-offs; and introduces, for the first time, an externally verifiable mechanism enabling device onboarding at scale while fundamentally preventing Sybil attacks and server-side tampering. Experiments demonstrate that the framework achieves strong privacy guarantees (ε ≤ 2) while improving model accuracy by 12–18%, and supports end-to-end auditability and regulatory compliance via third-party verification.

๐Ÿ“ Abstract
Federated Learning and Analytics (FLA) have seen widespread adoption by technology platforms for processing sensitive on-device data. However, basic FLA systems have privacy limitations: they do not necessarily require anonymization mechanisms like differential privacy (DP), and provide limited protections against a potentially malicious service provider. Adding DP to a basic FLA system currently requires either adding excessive noise to each device's updates, or assuming an honest service provider that correctly implements the mechanism and only uses the privatized outputs. Secure multiparty computation (SMPC)-based oblivious aggregations can limit the service provider's access to individual user updates and improve DP tradeoffs, but the tradeoffs are still suboptimal, and they suffer from scalability challenges and susceptibility to Sybil attacks. This paper introduces a novel system architecture that leverages trusted execution environments (TEEs) and open-sourcing to both ensure confidentiality of server-side computations and provide externally verifiable privacy properties, bolstering the robustness and trustworthiness of private federated computations.
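The improved DP tradeoff described in the abstract comes from moving aggregation and noising server-side, inside a TEE, so that only the noised aggregate ever leaves the enclave. The sketch below illustrates that central-DP aggregation step, assuming per-update norm clipping and Gaussian noise; the function and parameter names are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client update to clip_norm, sum, and add Gaussian noise.

    In the paper's architecture this logic would run inside a TEE, so the
    service provider only ever sees the noised aggregate.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for u in updates:
        u = np.asarray(u, dtype=float)
        norm = np.linalg.norm(u)
        if norm > clip_norm:
            # Scale the update down so its L2 norm equals clip_norm,
            # bounding each device's contribution to the sum.
            u = u * (clip_norm / norm)
        clipped.append(u)
    total = np.sum(clipped, axis=0)
    # Central DP: a single noise draw for the whole aggregate, rather than
    # per-device local noise, which is what improves the utility tradeoff.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total + noise
```

With `noise_multiplier=0` the function reduces to plain clipped summation, which makes the clipping behavior easy to check in isolation.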
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in Federated Learning and Analytics (FLA) systems.
Address limitations of differential privacy and secure multiparty computation.
Improve scalability and robustness against Sybil attacks in FLA.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses trusted execution environments for confidentiality
Integrates open-sourcing for verifiable privacy
Enhances federated learning with secure computations
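The open-sourcing contribution pairs with TEE remote attestation: clients or third-party auditors can check that the code measurement the enclave attests to matches a reproducible build of the published source. A minimal sketch of that comparison follows, assuming a SHA-256 code measurement; the function name and the omission of the hardware vendor's signature-chain verification are simplifications for illustration, not the paper's actual protocol.

```python
import hashlib

def measurement_matches_source(attested_measurement: bytes,
                               reproducible_build: bytes) -> bool:
    # Hash the binary produced by a reproducible build of the open-source
    # code and compare it against the measurement in the TEE's attestation
    # report. A real verifier would also validate the hardware vendor's
    # certificate chain over the report; that step is omitted here.
    expected = hashlib.sha256(reproducible_build).digest()
    return attested_measurement == expected
```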
Authors

Hubert Eichner (Google Research)
Daniel Ramage (Google Research)
Kallista Bonawitz (Google Research)
Dzmitry Huba (Google Research)
Tiziano Santoro (Google Research)
Brett McLarnon (Google Research)
Timon Van Overveldt (Google Research)
Nova Fallen (Google Research)
Peter Kairouz (Research Scientist, Google)
Albert Cheu (Research Scientist, Google)
Katharine Daly (Google Research)
Adria Gascon (Google Research)
Marco Gruteser (Google Research)
Brendan McMahan (Google Research)