Differentially Private Block-wise Gradient Shuffle for Deep Learning

📅 2024-07-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the fundamental trade-off among privacy preservation, model utility, and training efficiency in differentially private deep learning, this paper proposes Differentially Private Block-wise Gradient Shuffle (DP-BloGS), a novel framework that replaces conventional Gaussian noise injection with probabilistic gradient reordering and block-level adaptive clipping, including batch layer clipping and gradient accumulation. From an information-theoretic perspective, this yields a more robust defense against data extraction attacks. Theoretical analysis provides a rigorous privacy budget guarantee under Rényi Differential Privacy. Empirical evaluation demonstrates that DP-BloGS attains training speed comparable to non-private training, matches the accuracy of DP-SGD, and significantly improves resilience against membership inference and data reconstruction attacks. Its core innovation is the systematic integration of gradient shuffling into a block-level adaptive architecture, achieving a balanced triad of privacy, utility, and efficiency.

📝 Abstract
Traditional Differentially Private Stochastic Gradient Descent (DP-SGD) ensures privacy by adding statistical noise, drawn from a Gaussian distribution, to gradients. This paper introduces the novel Differentially Private Block-wise Gradient Shuffle (DP-BloGS) algorithm for deep learning. DP-BloGS builds on the existing private deep learning literature but makes a definitive shift by taking a probabilistic approach to gradient noise introduction through shuffling, modeled after information-theoretic privacy analyses. The theoretical results presented in this paper show that the combination of shuffling, parameter-specific block size selection, batch layer clipping, and gradient accumulation allows DP-BloGS to achieve training times close to those of non-private training while maintaining privacy and utility guarantees similar to DP-SGD. DP-BloGS is found to be significantly more resistant to data extraction attempts than DP-SGD. The experimental findings validate the theoretical results.
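The core mechanics the abstract names (splitting a parameter's gradient into blocks, clipping each block, and shuffling within blocks) can be sketched roughly as below. This is a minimal illustration, not the paper's exact algorithm: the function name, the per-block L2 clipping threshold, and the uniform within-block permutation are assumptions made for the sake of the example.

```python
import numpy as np

def blockwise_shuffle_clip(grad, block_size, clip_norm, rng):
    """Illustrative sketch of block-wise gradient shuffling.

    Flattens a gradient tensor, splits it into fixed-size blocks,
    clips each block's L2 norm to `clip_norm`, and randomly permutes
    the entries within each block. The real DP-BloGS algorithm also
    involves parameter-specific block size selection and gradient
    accumulation, which are omitted here.
    """
    flat = grad.ravel().copy()
    for start in range(0, flat.size, block_size):
        block = flat[start:start + block_size]  # view into `flat`
        norm = np.linalg.norm(block)
        if norm > clip_norm:
            block *= clip_norm / norm           # clip block to the norm bound
        rng.shuffle(block)                      # in-place permutation of the block
    return flat.reshape(grad.shape)

# Usage: shuffle a toy 4x8 gradient with blocks of 8 entries.
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 8))
private_g = blockwise_shuffle_clip(g, block_size=8, clip_norm=1.0,
                                   rng=np.random.default_rng(1))
```

Because shuffling only reorders values, each block's entries are preserved as a multiset after clipping; the privacy argument in the paper comes from the uncertainty a permutation introduces, analyzed through Rényi Differential Privacy, rather than from additive noise.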
Problem

Research questions and friction points this paper is trying to address.

Privacy Preservation
Deep Learning
Performance Maintenance

Innovation

Methods, ideas, or system contributions that make the work stand out.

DP-BloGS
Privacy Protection
Gradient Shuffling
David Zagardo