🤖 AI Summary
This work addresses differentially private release of high-dimensional linear query workloads in distributed settings. Methodologically, it introduces the first distributed High-Dimensional Matrix Mechanism (HDMM) framework, eliminating the need for a trusted curator. It adapts centralized HDMM to the distributed setting by integrating secure aggregation protocols with distributed optimization solvers, enabling end-to-end secure computation against malicious clients and a malicious aggregator, assuming an honest majority. Differential privacy noise is injected locally at each client, yielding strong end-to-end privacy guarantees. A preliminary empirical evaluation on realistic datasets and workloads with thousands of clients demonstrates end-to-end latency under one minute while matching the accuracy of centralized HDMM. The approach thus jointly delivers high accuracy, scalability, and rigorous security without relying on a trusted central authority.
📝 Abstract
We present the Distributed High-Dimensional Matrix Mechanism (Distributed HDMM), a protocol for answering workloads of linear queries on distributed data that provides the accuracy of central-model HDMM without a trusted curator. Distributed HDMM leverages a secure aggregation protocol to evaluate HDMM on distributed data, and is secure against a malicious aggregator and malicious clients (assuming an honest majority). Our preliminary empirical evaluation shows that Distributed HDMM can run on realistic datasets and workloads with thousands of clients in less than one minute.
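To make the core idea concrete, here is a minimal conceptual sketch of the matrix-mechanism pipeline with locally injected noise. Everything below is illustrative and simplified: the domain size, workload `W`, placeholder strategy `A`, noise scale `sigma`, and the plain-sum stand-in for secure aggregation are all assumptions, not the paper's protocol (the real system optimizes `A` via HDMM and sums client shares under a maliciously secure aggregation protocol). The sketch uses Gaussian noise because per-client Gaussian shares with variance σ²/n sum to the central-model noise N(0, σ²).

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8             # domain size (illustrative)
n_clients = 100   # number of participating clients (illustrative)
sigma = 2.0       # total noise scale; a real deployment calibrates this to
                  # (eps, delta)-DP using the L2 sensitivity of the strategy A

# Workload of linear queries W; here, all prefix sums over the domain.
W = np.tril(np.ones((d, d)))

# HDMM optimizes a strategy matrix A for W; the identity is used here only
# as a placeholder strategy.
A = np.eye(d)

# Each client holds a private local histogram x_i over the domain.
client_data = [rng.multinomial(10, np.ones(d) / d) for _ in range(n_clients)]

# Each client measures A @ x_i and adds Gaussian noise with variance
# sigma^2 / n locally; the summed per-client noise reproduces the
# central-model noise N(0, sigma^2) without any trusted curator.
shares = [A @ x + rng.normal(0.0, sigma / np.sqrt(n_clients), size=d)
          for x in client_data]

# Secure aggregation reveals only the sum of the shares to the aggregator
# (modeled here by a plain sum).
noisy_measurements = np.sum(shares, axis=0)

# Reconstruct workload answers from the noisy strategy measurements.
x_hat = np.linalg.pinv(A) @ noisy_measurements
answers = W @ x_hat
```

With 100 clients of 10 records each, the final prefix-sum answer is close to the true total of 1,000, up to the calibrated noise.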