An Optimization Framework for Differentially Private Sparse Fine-Tuning

📅 2025-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the suboptimal parameter selection in sparse fine-tuning of large models under differential privacy (DP). We propose the first learnable sparse parameter selection framework grounded in private gradient information. Unlike fixed-layer update schemes or heuristic strategies relying on public weights, our approach formulates sparse selection as a privacy-preserving optimization problem, dynamically identifying highly sensitive and high-contribution parameter subsets using gradients privatized via DP-SGD. By integrating Rényi DP accounting with gradient sensitivity analysis, we jointly optimize privacy-utility trade-offs. Experiments across multiple vision models and datasets demonstrate that, under strict (ε,δ)-DP guarantees, our method improves average prediction accuracy by 3.2–5.7 percentage points over both full-parameter DP fine-tuning and state-of-the-art sparse DP methods, significantly advancing the privacy-utility frontier.

Technology Category

Application Category

📝 Abstract
Differentially private stochastic gradient descent (DP-SGD) is broadly considered to be the gold standard for training and fine-tuning neural networks under differential privacy (DP). With the increasing availability of high-quality pre-trained model checkpoints (e.g., vision and language models), fine-tuning has become a popular strategy. However, despite recent progress in understanding and applying DP-SGD for private transfer learning tasks, significant challenges remain -- most notably, the performance gap between models fine-tuned with DP-SGD and their non-private counterparts. Sparse fine-tuning on private data has emerged as an alternative to full-model fine-tuning; recent work has shown that privately fine-tuning only a small subset of model weights and keeping the rest of the weights fixed can lead to better performance. In this work, we propose a new approach for sparse fine-tuning of neural networks under DP. Existing work on private sparse finetuning often used fixed choice of trainable weights (e.g., updating only the last layer), or relied on public model's weights to choose the subset of weights to modify. Such choice of weights remains suboptimal. In contrast, we explore an optimization-based approach, where our selection method makes use of the private gradient information, while using off the shelf privacy accounting techniques. Our numerical experiments on several computer vision models and datasets show that our selection method leads to better prediction accuracy, compared to full-model private fine-tuning or existing private sparse fine-tuning approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses performance gap in DP-SGD fine-tuning
Proposes optimization-based sparse fine-tuning under DP
Improves prediction accuracy over existing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimization-based sparse fine-tuning under differential privacy.
Uses private gradient information for weight selection.
Improves prediction accuracy over existing private fine-tuning methods.
🔎 Similar Papers