DPQuant: Efficient and Differentially-Private Model Training via Dynamic Quantization Scheduling

📅 2025-09-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Quantization in differentially private stochastic gradient descent (DP-SGD) amplifies quantization variance due to noise injection, causing significant accuracy degradation. To address this, we propose DPQuant, the first noise-aware dynamic quantization scheduling framework. DPQuant guides low-precision quantization via loss sensitivity estimation and integrates probabilistic layer sampling with gradient-sensitive layer prioritization to enable adaptive per-iteration layer selection. By suppressing variance amplification under extremely tight privacy budgets, DPQuant achieves Pareto-optimal trade-offs between accuracy and computational efficiency while guaranteeing rigorous differential privacy. Experiments on ResNet and DenseNet demonstrate that DPQuant attains up to a 2.21× theoretical throughput gain over static quantization, with accuracy loss bounded within 2%, substantially outperforming existing baselines.

πŸ“ Abstract
Differentially-Private SGD (DP-SGD) is a powerful technique to protect user privacy when using sensitive data to train neural networks. During training, converting model weights and activations into low-precision formats, i.e., quantization, can drastically reduce training times, energy consumption, and cost, and is thus a widely used technique. In this work, we demonstrate that quantization causes significantly higher accuracy degradation in DP-SGD compared to regular SGD. We observe that this is caused by noise injection in DP-SGD, which amplifies quantization variance, leading to disproportionately large accuracy degradation. To address this challenge, we present DPQuant, a dynamic quantization framework that adaptively selects a changing subset of layers to quantize at each epoch. Our method combines two key ideas that effectively reduce quantization variance: (i) probabilistic sampling of the layers that rotates which layers are quantized every epoch, and (ii) loss-aware layer prioritization, which uses a differentially private loss sensitivity estimator to identify layers that can be quantized with minimal impact on model quality. This estimator consumes a negligible fraction of the overall privacy budget, preserving DP guarantees. Empirical evaluations on ResNet18, ResNet50, and DenseNet121 across a range of datasets demonstrate that DPQuant consistently outperforms static quantization baselines, achieving near Pareto-optimal accuracy-compute trade-offs and up to 2.21× theoretical throughput improvements on low-precision hardware, with less than 2% drop in validation accuracy.
Problem

Research questions and friction points this paper is trying to address.

Quantization increases accuracy degradation in DP-SGD training
Noise injection amplifies quantization variance in private learning
Static quantization methods underperform in differentially private settings
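The variance-amplification effect listed above can be illustrated with a toy scalar model (a hypothetical sketch, not the paper's experimental setup): quantizing a gradient after DP-style Gaussian noise injection yields a much larger mean-squared error against the true gradient than quantization alone, since the noise variance adds on top of the quantization error.

```python
import random

def quantize(x, step):
    """Uniform quantizer: round x to the nearest multiple of `step`."""
    return round(x / step) * step

def empirical_mse(noise_sigma, step, n=20000, seed=0):
    """MSE of a quantized (optionally DP-noised) scalar gradient against
    its true value. The gradient value and all parameters here are
    illustrative choices, not taken from the paper."""
    rng = random.Random(seed)
    g = 0.37  # arbitrary "true" gradient value
    total = 0.0
    for _ in range(n):
        noisy = g + rng.gauss(0.0, noise_sigma)  # DP-SGD-style noise
        total += (quantize(noisy, step) - g) ** 2
    return total / n

# Quantization alone vs. quantization after noise injection:
mse_plain = empirical_mse(noise_sigma=0.0, step=0.1)
mse_dp = empirical_mse(noise_sigma=0.5, step=0.1)
```

In this toy model the noisy case is dominated by the injected noise variance, which is why static low-precision schemes that are harmless under regular SGD can degrade badly under DP-SGD.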
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic quantization scheduling for layers
Probabilistic sampling to rotate quantized layers
Loss-aware prioritization using private sensitivity estimator
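The two ideas above can be combined in a short sketch (hypothetical function and parameter names; the actual DPQuant estimator computes sensitivities under differential privacy, which is omitted here): layers with lower estimated loss sensitivity get a higher probability of being quantized, and weighted sampling without replacement rotates the chosen subset from epoch to epoch.

```python
import random

def select_layers_to_quantize(sensitivities, k, rng=None):
    """Pick k layer indices to quantize this epoch.

    Lower loss sensitivity => higher chance of being quantized; the
    random keys rotate the chosen subset across epochs. Uses the
    Efraimidis-Spirakis key trick (r ** (1/w)) for weighted sampling
    without replacement.
    """
    rng = rng or random.Random()
    # Invert sensitivities into sampling weights: layers whose loss is
    # least sensitive to quantization are the safest to quantize.
    max_s = max(sensitivities)
    weights = [(max_s - s) + 1e-6 for s in sensitivities]
    keyed = [(rng.random() ** (1.0 / w), i) for i, w in enumerate(weights)]
    keyed.sort(reverse=True)
    return sorted(i for _, i in keyed[:k])

# Example: 4 layers, quantize 2 per epoch. Over many epochs the
# low-sensitivity layers (indices 1 and 3) are chosen far more often
# than the highly sensitive layer 0, while the subset still rotates.
rng = random.Random(0)
picks = [select_layers_to_quantize([0.9, 0.1, 0.5, 0.05], k=2, rng=rng)
         for _ in range(200)]
```

The rotation matters because it spreads quantization error across the network over time instead of concentrating it in a fixed subset of layers.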