Decentralized Differentially Private Power Method

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses principal component analysis (PCA) in decentralized multi-agent networks under row-wise data partitioning, where agents hold disjoint feature subsets and no central aggregator exists. Method: We propose the first decentralized power method satisfying distributed differential privacy (DDP). Each agent performs local iterations, exchanges intermediate estimates via embedded consensus, and injects calibrated Gaussian noise to preserve privacy without centralized coordination. Contributions/Results: We theoretically characterize, for the first time, the impact of network topology on the privacy–utility trade-off, combining linear system dynamics with high-dimensional probability analysis to establish rigorous $(\varepsilon,\delta)$-differential privacy and convergence guarantees. Experiments demonstrate that, under moderate privacy budgets ($\varepsilon \in [2,5]$), our method significantly outperforms local differential privacy baselines in estimation accuracy, while achieving rapid convergence and flexible per-iteration privacy optimization.
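The iteration described above (local power-method steps, noisy embedding exchange via consensus, no central aggregator) can be sketched as a toy simulation. All sizes, the ring topology, the noise scale `sigma`, and the consensus-round count are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 agents on a ring graph, each holding a
# disjoint block of feature rows of the data matrix X (row-wise partitioning).
n, m, n_agents = 8, 200, 4
X = rng.normal(size=(n, m))
X[0] *= 3.0                              # plant a dominant direction for a clear eigengap
blocks = np.array_split(np.arange(n), n_agents)

# Doubly stochastic mixing matrix for the ring topology
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Illustrative noise scale and iteration counts (NOT (ε, δ)-calibrated)
sigma, T, R = 0.01, 30, 10

v = rng.normal(size=n)                   # random init already leaks little about the data
v /= np.linalg.norm(v)

for _ in range(T):
    # Each agent shares only a noisy local embedding s_i = X_i^T v_i + N(0, σ² I)
    S = np.stack([X[blk].T @ v[blk] + sigma * rng.normal(size=m) for blk in blocks])
    for _ in range(R):                   # embedded consensus: gossip-average the embeddings
        S = W @ S
    for i, blk in enumerate(blocks):     # each agent updates only its own feature block
        v[blk] = X[blk] @ S[i]
    v /= np.linalg.norm(v)               # the norm would also be agreed on by consensus

# Alignment with the true top eigenvector of the global covariance
C = X @ X.T / m
top = np.linalg.eigh(C)[1][:, -1]
print(abs(top @ v))                      # close to 1 when noise and consensus error are small
```

Since each consensus round contracts disagreement geometrically (at a rate set by the mixing matrix's second eigenvalue), the topology directly controls how much residual consensus error, on top of the injected noise, enters each power iteration.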

📝 Abstract
We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Unlike conventional decentralized PCA approaches where each agent accesses the full n-dimensional sample space, we address the challenging scenario where each agent observes only a subset of dimensions through row-wise data partitioning. Our method ensures $(ε,δ)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network without requiring a central aggregator. We achieve this by having agents share only local embeddings of the current eigenvector iterate, leveraging both the inherent privacy from random initialization and carefully calibrated Gaussian noise additions. We prove that our algorithm satisfies the prescribed $(ε,δ)$-DP guarantee and establish convergence rates that explicitly characterize the impact of the network topology. Our theoretical analysis, based on linear dynamics and high-dimensional probability theory, provides tight bounds on both privacy and utility. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches, with particularly strong performance in moderate privacy regimes ($ε \in [2, 5]$). The method converges rapidly, allowing practitioners to trade iterations for enhanced privacy while maintaining competitive utility.
Problem

Research questions and friction points this paper is trying to address.

Decentralized PCA with partial agent data access
Ensuring differential privacy in collaborative eigenvector estimation
Balancing privacy-utility tradeoff in networked multi-agent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Differentially Private Power Method
Row-wise data partitioning for PCA
Gaussian noise for privacy guarantees
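The Gaussian-noise ingredient listed above is typically calibrated to the query's sensitivity. As a point of reference, the classic Gaussian mechanism sets the noise scale as below; note this formula is only valid for ε ≤ 1, so the paper's per-iteration calibration for its ε ∈ [2, 5] regime necessarily differs:

```python
import math

def gaussian_sigma(sensitivity: float, eps: float, delta: float) -> float:
    """Classic Gaussian-mechanism noise scale for (eps, delta)-DP.

    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps
    Valid for eps <= 1; larger budgets need a tighter analysis.
    """
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

# Smaller eps (stronger privacy) demands proportionally more noise:
print(gaussian_sigma(1.0, 0.5, 1e-5))
print(gaussian_sigma(1.0, 1.0, 1e-5))
```

In the decentralized setting, this per-message noise is only part of the story: consensus averaging across agents and the random initialization both contribute additional, topology-dependent privacy amplification.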