$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning

📅 2024-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high communication overhead and weak fault tolerance of multi-round secure aggregation in federated learning, this paper proposes OPA, a one-round private aggregation protocol. In each round, each client requires only a single interaction (or none at all), enabling native support for dynamic client participation and dropout resilience. OPA is the first private aggregation scheme achieving both adaptive security and one-round efficiency without relying on committee election, thereby breaking from conventional multi-round paradigms. It is built upon post-quantum and number-theoretic primitives, including Learning-with-Rounding (LWR)/Learning-with-Errors (LWE), class groups, and the Decisional Composite Residuosity (DCR) assumption, and offers two concrete constructions: (i) (threshold) key-homomorphic pseudorandom functions, and (ii) seed-homomorphic pseudorandom generators combined with secret sharing. Experiments training logistic regression and MLP classifiers on MNIST, CIFAR-10, and CIFAR-100 demonstrate that OPA significantly reduces communication costs compared to state-of-the-art schemes, while maintaining rigorous theoretical security guarantees and practical deployability.
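The second construction's core idea can be illustrated with a toy sketch: each client masks its input with the output of a seed-homomorphic PRG, so the server can strip the combined mask from the aggregate in one shot. This is a minimal illustration, not the paper's protocol: the "PRG" below is a public linear map (seed-homomorphic by construction, but not cryptographically secure), whereas OPA's real constructions derive security from LWR/LWE, where the homomorphism holds only approximately after rounding. All names and parameters here are illustrative.

```python
import numpy as np

# Toy seed-homomorphic "PRG": G(s) = A @ s mod q, with a public random matrix A.
# Linearity gives G(s1) + G(s2) = G(s1 + s2) mod q, which is exactly the
# homomorphism the aggregation exploits. (A linear map is NOT a secure PRG;
# this only demonstrates the mechanics.)
q = 2**16
rng = np.random.default_rng(0)
out_len, seed_len = 8, 4
A = rng.integers(0, q, size=(out_len, seed_len))

def G(seed):
    return (A @ seed) % q

# Each client i holds an input vector x_i and a random seed s_i.
n_clients = 3
seeds = [rng.integers(0, q, size=seed_len) for _ in range(n_clients)]
inputs = [rng.integers(0, 100, size=out_len) for _ in range(n_clients)]

# One-shot message per client: the masked input (in the real protocol,
# clients also send shares of s_i so dropouts can be handled).
masked = [(x + G(s)) % q for x, s in zip(inputs, seeds)]

# The server sums the masked inputs; by seed-homomorphism the combined mask
# equals G(sum of seeds), which it can remove in a single step.
agg_seed = sum(seeds) % q
total = (sum(masked) - G(agg_seed)) % q
assert np.array_equal(total, sum(inputs) % q)
```

Note that no client ever reveals its individual input or seed to the server; only the masked vectors and (shares of) seed material are transmitted, in a single round.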

📝 Abstract
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once. We construct $\mathsf{OPA}$ based on LWR, LWE, class groups, DCR and demonstrate applications to privacy-preserving Federated Learning (FL) where clients \emph{speak once}. This is a sharp departure from prior multi-round FL protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on MNIST, CIFAR-10, and CIFAR-100 datasets. We build two flavors of $\mathsf{OPA}$: (1) from (threshold) key homomorphic PRF and (2) from seed homomorphic PRG and secret sharing.
Problem

Research questions and friction points this paper is trying to address.

Minimizing client interaction in secure aggregation protocols
Enabling single-round communication for privacy-preserving federated learning
Eliminating complex committee selection for adaptive security
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-shot client interaction for private aggregation
Uses LWR, LWE, class groups, DCR foundations
Eliminates complex committee selection protocols
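The dropout resilience behind the "speak once (or not at all)" property rests on threshold secret sharing: as long as enough clients respond, the seed material needed to unmask the aggregate can be reconstructed. The sketch below is a generic Shamir t-of-n scheme over a prime field, shown only to illustrate the mechanism; the field size, API, and parameters are assumptions, not the paper's concrete instantiation.

```python
import random

# Minimal Shamir t-of-n secret sharing over a prime field. In spirit, OPA's
# second construction uses secret sharing so the server can recover the
# combined mask even when some clients drop out.
P = 2**31 - 1  # a Mersenne prime; illustrative field choice

def share(secret, t, n):
    # Random degree-(t-1) polynomial with constant term = secret;
    # share i is the evaluation at x = i.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

seed = 123456789
shares = share(seed, t=3, n=5)
# Any 3 of 5 shares suffice, so up to 2 dropouts are tolerated.
assert reconstruct(shares[:3]) == seed
assert reconstruct(shares[1:4]) == seed
```

Because reconstruction needs only a threshold of shares, clients that go offline after their single message do not block the aggregation, which is what makes the one-shot interaction pattern viable.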
Harish Karthikeyan
J.P. Morgan AI Research, J.P. Morgan AlgoCRYPT Center of Excellence, New York, NY, USA
Antigoni Polychroniadou
Executive Director, JPMorgan AI Research - Head of JPMorgan AlgoCRYPT CoE
Cryptography