Distributed Differentially Private Data Analytics via Secure Sketching

📅 2024-11-30
🏛️ IACR Cryptology ePrint Archive
📈 Citations: 0
Influential: 0
🤖 AI Summary
To resolve the inherent trade-off in distributed differential privacy, namely the central model's single point of failure versus the local model's excessive noise, this paper proposes the linear-transformation model as an intermediate. Clients have access to a trusted platform that applies only a public matrix to their inputs; because this computation is so simple, it can be distributed across multiple servers with efficient secure multiparty computation rather than entrusted to a single server. The model thus inherits much of the expressive power of the central model together with the trust minimization of the local model. Theoretically, it incurs significantly less noise than the local model, with estimation error independent of the number of clients. Empirically, on private low-rank approximation and private ridge regression tasks, it achieves strong data utility with reduced communication overhead and efficient use of the privacy budget.

📝 Abstract
We introduce the linear-transformation model, a distributed model of differentially private data analysis. Clients have access to a trusted platform capable of applying a public matrix to their inputs. Such computations can be securely distributed across multiple servers using simple and efficient secure multiparty computation techniques. The linear-transformation model serves as an intermediate model between the highly expressive central model and the minimal local model. In the central model, clients have access to a trusted platform capable of applying any function to their inputs. However, this expressiveness comes at a cost, as it is often prohibitively expensive to distribute such computations, leading to the central model typically being implemented by a single trusted server. In contrast, the local model assumes no trusted platform, which forces clients to add significant noise to their data. The linear-transformation model avoids the single point of failure for privacy present in the central model, while also mitigating the high noise required in the local model. We demonstrate that linear transformations are very useful for differential privacy, allowing for the computation of linear sketches of input data. These sketches largely preserve utility for tasks such as private low-rank approximation and private ridge regression, while introducing only minimal error, critically independent of the number of clients.
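As a toy illustration of the sketching idea described above (not the paper's actual protocol), the snippet below applies a public Gaussian sketch matrix to per-client rows and adds Gaussian noise once to the small sketch, so the noise scale does not depend on the number of clients. The matrix shape, data normalization, and noise calibration are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, sketch_rows = 1000, 20, 50

# Toy data: each client holds one row, normalized so ||x_i|| <= 1.
X = rng.normal(size=(n_clients, dim))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)

# Public sketch matrix S, known to all parties.
S = rng.normal(size=(sketch_rows, n_clients)) / np.sqrt(sketch_rows)

# The trusted platform only applies S: client i contributes the outer
# product of the public column S[:, i] with its private row x_i, and the
# contributions are summed (in practice, inside MPC).
sketch = sum(np.outer(S[:, i], X[i]) for i in range(n_clients))  # equals S @ X

# Replacing one client's row changes the sketch by at most
# 2 * max_i ||S[:, i]|| in Frobenius norm, so Gaussian noise is added once
# to the small sketch rather than to every client's data.
sensitivity = 2 * np.linalg.norm(S, axis=0).max()
epsilon, delta = 1.0, 1e-6
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
private_sketch = sketch + rng.normal(scale=sigma, size=sketch.shape)
```

Downstream tasks then operate only on `private_sketch`, which is why the per-client data never needs local noise in this setting.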
Problem

Research questions and friction points this paper is trying to address.

Designing a distributed model for differentially private data analysis
Balancing the trade-offs between the central and local models
Enabling private computations with minimal error
Innovation

Methods, ideas, or system contributions that make the work stand out.

The linear-transformation model for distributed differentially private analytics
Secure sketching via public linear transformations
Mitigates local-model noise and the central model's single point of failure
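One of the paper's example tasks, private ridge regression, can likewise be sketched in simplified form: once features and labels have been compressed by a public sketch matrix (with noise standing in for the privacy step), the regression is solved entirely from the small sketch. The sketch size, noise scale, and regularizer below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 10, 200  # clients, features, sketch size

# Synthetic regression data distributed across n clients (one row each).
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.1 * rng.normal(size=n)

# Public sketch matrix applied jointly to the features and labels.
S = rng.normal(size=(m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b

# Toy Gaussian perturbation of the sketch (a stand-in for the DP step;
# a real mechanism would calibrate this to the data's sensitivity).
sigma = 0.05
SA = SA + rng.normal(scale=sigma, size=SA.shape)
Sb = Sb + rng.normal(scale=sigma, size=Sb.shape)

# Ridge regression solved from the small, noisy sketch alone:
# argmin_w ||SA w - Sb||^2 + lam * ||w||^2.
lam = 1.0
w_hat = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```

Because the solver touches only the m-by-d sketch, both the communication cost and the noise are independent of the number of clients n.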