PLRV-O: Advancing Differentially Private Deep Learning via Privacy Loss Random Variable Optimization

๐Ÿ“… 2025-09-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Traditional DP-SGD employs fixed-form noise (e.g., Gaussian or Laplace), where a single parameter governs both privacy loss and utility degradation, resulting in strong coupling that impedes flexible trade-offs across diverse training configurations (e.g., varying iteration count $T$ or batch size $B$). To address this, we propose PLRV-O, a framework that breaks the single-degree-of-freedom constraint by directly optimizing the Privacy Loss Random Variable (PLRV) to construct a tunable search space of noise distributions, enabling decoupled optimization of privacy and utility losses. PLRV-O supports adaptive configuration across model scale, training duration, and sampling strategy, and integrates a tight moments accountant for rigorous privacy accounting. Experiments demonstrate substantial improvements: on CIFAR-10 at $\varepsilon \approx 0.5$, accuracy reaches 94.03% (+10.1 percentage points); on SST-2 at $\varepsilon \approx 0.2$, it achieves 92.20% (+41.95 percentage points), significantly outperforming standard DP-SGD baselines.

๐Ÿ“ Abstract
Differentially Private Stochastic Gradient Descent (DP-SGD) is a standard method for enforcing privacy in deep learning, typically using the Gaussian mechanism to perturb gradient updates. However, conventional mechanisms such as Gaussian and Laplacian noise are parameterized only by variance or scale. This single degree of freedom ties the magnitude of noise directly to both privacy loss and utility degradation, preventing independent control of these two factors. The problem becomes more pronounced when the number of composition rounds T and batch size B vary across tasks, as these variations induce task-dependent shifts in the privacy-utility trade-off, where small changes in noise parameters can disproportionately affect model accuracy. To address this limitation, we introduce PLRV-O, a framework that defines a broad search space of parameterized DP-SGD noise distributions, where privacy loss moments are tightly characterized yet can be optimized more independently with respect to utility loss. This formulation enables systematic adaptation of noise to task-specific requirements, including (i) model size, (ii) training duration, (iii) batch sampling strategies, and (iv) clipping thresholds under both training and fine-tuning settings. Empirical results demonstrate that PLRV-O substantially improves utility under strict privacy constraints. On CIFAR-10, a fine-tuned ViT achieves 94.03% accuracy at ε ≈ 0.5, compared to 83.93% with Gaussian noise. On SST-2, RoBERTa-large reaches 92.20% accuracy at ε ≈ 0.2, versus 50.25% with Gaussian.
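For context, a minimal sketch of the standard DP-SGD update the abstract describes: clip each per-example gradient, average, and add Gaussian noise whose single scale parameter couples privacy and utility (the coupling PLRV-O relaxes by searching over a richer family of noise distributions). Function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One standard DP-SGD update with the Gaussian mechanism.

    per_example_grads: list of per-example gradient arrays
    clip_norm: L2 clipping threshold C
    noise_multiplier: sigma/C ratio; this single parameter governs
        both privacy loss and utility degradation.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise std is tied to clip_norm and the
    # noise multiplier -- the one degree of freedom discussed above.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return mean_grad + noise
```

PLRV-O's contribution is, roughly, to replace the fixed Gaussian `noise` draw with a distribution chosen from a parameterized family whose privacy loss moments are tightly accounted, so the noise shape can be tuned to task settings (model size, T, B, clipping) rather than fixed a priori.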
Problem

Research questions and friction points this paper is trying to address.

Optimizing noise distribution for independent privacy-utility control
Addressing task-dependent privacy-utility trade-off variations
Improving model accuracy under strict differential privacy constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameterized noise distributions for independent privacy-utility control
Optimized noise adaptation to task-specific requirements
Tight characterization of privacy loss moments during optimization
๐Ÿ”Ž Similar Papers
No similar papers found.
Qin Yang
University of Connecticut, Storrs, USA
Nicholas Stout
Iowa State University, Ames, USA
Meisam Mohammady
Assistant Professor at Iowa State University
Differential Privacy · Federated Machine Learning · Secure Multiparty Computation
Han Wang
The University of Kansas, Lawrence, USA
Ayesha Samreen
Iowa State University, Ames, USA
Christopher J Quinn
Iowa State University, Ames, USA
Yan Yan
University of Illinois at Chicago, Chicago, USA
Ashish Kundu
Head of Cybersecurity Research, Cisco Research
Security · Privacy & Compliance
Yuan Hong
University of Connecticut
Security · Privacy · AI Security · Applied Cryptography