Empirical Privacy Variance

📅 2025-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a critical phenomenon in differentially private (DP) fine-tuning of language models: under identical $(\varepsilon, \delta)$-DP theoretical guarantees, distinct hyperparameter configurations can yield substantially different empirical privacy leakage, quantified via memorization; the authors term this *empirical privacy variance*. They formally define the concept and show that hyperparameter tuning in DP-SGD entails a no-free-lunch trade-off: optimizing utility under a fixed privacy budget often comes at the expense of empirical privacy. To address this, they propose hyperparameter-selection heuristics that explicitly account for empirical privacy while preserving utility, validated through memorization-based measurement, regression analysis, and privacy auditing. The results also expose limitations of existing privacy audits and motivate testable hypotheses on the relationship between DP parameters, hyperparameters, and empirical leakage.
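Context for the phenomenon: the $(\varepsilon, \delta)$ accounting in DP-SGD depends only on the noise multiplier, the sampling rate, and the number of steps; hyperparameters such as the clipping norm and learning rate rescale the signal but do not enter the guarantee, so configurations sharing the same theoretical $\varepsilon$ can behave very differently in practice. A minimal sketch of the per-step DP-SGD gradient privatization (plain Python for illustration; the `privatize` helper is hypothetical, not the paper's code):

```python
import math
import random

def privatize(per_example_grads, clip_norm, noise_multiplier):
    """One DP-SGD privatization step: clip each per-example gradient
    to L2 norm <= clip_norm, sum, then add Gaussian noise with
    std = noise_multiplier * clip_norm (the L2 sensitivity of the sum)."""
    def l2(v):
        return math.sqrt(sum(x * x for x in v))

    clipped = []
    for g in per_example_grads:
        scale = min(1.0, clip_norm / max(l2(g), 1e-12))
        clipped.append([x * scale for x in g])

    summed = [sum(col) for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm
    return [x + random.gauss(0.0, sigma) for x in summed]
```

Note that `clip_norm` cancels out of the privacy accounting (both sensitivity and noise scale with it), which is exactly why two runs with different clipping norms or learning rates can be calibrated to the same $(\varepsilon, \delta)$ yet memorize training data to different degrees.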

📝 Abstract
We propose the notion of empirical privacy variance and study it in the context of differentially private fine-tuning of language models. Specifically, we show that models calibrated to the same $(\varepsilon, \delta)$-DP guarantee using DP-SGD with different hyperparameter configurations can exhibit significant variations in empirical privacy, which we quantify through the lens of memorization. We investigate the generality of this phenomenon across multiple dimensions and discuss why it is surprising and relevant. Through regression analysis, we examine how individual and composite hyperparameters influence empirical privacy. The results reveal a no-free-lunch trade-off: existing practices of hyperparameter tuning in DP-SGD, which focus on optimizing utility under a fixed privacy budget, often come at the expense of empirical privacy. To address this, we propose refined heuristics for hyperparameter selection that explicitly account for empirical privacy, showing that they are both precise and practically useful. Finally, we take preliminary steps to understand empirical privacy variance. We propose two hypotheses, identify limitations in existing techniques like privacy auditing, and outline open questions for future research.
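The abstract quantifies empirical privacy "through the lens of memorization." One standard memorization metric (not necessarily the paper's exact choice) is canary exposure in the style of Carlini et al.'s "secret sharer": the model's loss on an inserted canary is ranked against its losses on random candidate sequences. A minimal sketch, assuming per-sequence losses have already been computed:

```python
import math

def exposure(canary_loss, candidate_losses):
    """Exposure of a canary among random candidates:
    log2(N) - log2(rank), where rank is the canary's position
    (1 = lowest loss, i.e. most memorized) in the pool of the
    canary plus all candidates. Higher exposure = more leakage."""
    rank = 1 + sum(1 for loss in candidate_losses if loss < canary_loss)
    n = len(candidate_losses) + 1
    return math.log2(n) - math.log2(rank)
```

Under this metric, two models with identical $(\varepsilon, \delta)$ guarantees can assign very different ranks to the same canary, which is the empirical privacy variance the paper measures.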
Problem

Research questions and friction points this paper is trying to address.

Quantifying empirical privacy variance in differentially private language models
Investigating hyperparameter impact on privacy-utility trade-offs in DP-SGD
Proposing heuristics for hyperparameter selection to improve empirical privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the concept of empirical privacy variance
Analyzes hyperparameters' influence on empirical privacy via regression
Proposes refined, privacy-aware heuristics for hyperparameter selection