Differentially Private Learning of Exponential Distributions: Adaptive Algorithms and Tight Bounds

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies parameter estimation for the exponential distribution Exp(λ) under pure differential privacy (DP): given n i.i.d. samples, the goal is to design an ε-DP algorithm that privately estimates λ so that the learned distribution approximates the true one in total variation distance, with an extension to the Pareto distribution. The authors propose two complementary pure ε-DP estimators: a clipped maximum likelihood estimator augmented with Laplace noise, and a quantile estimator leveraging the fact that the (1−1/e)-quantile of Exp(λ) equals 1/λ; the two are combined into an adaptive strategy. This yields the first near-optimal sample complexity of Θ(1/ε²), with tight upper and lower bounds, the lower bound derived via packing arguments and group privacy. The adaptive approach is shown to be particularly effective for heavy-tailed distributions, and under (ε,δ)-DP, private estimation is shown to be possible without prior knowledge of parameter bounds.
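The first estimator described above can be sketched as follows. This is a minimal illustration, not the paper's calibration: the clipping threshold `clip` is a placeholder choice, and the only DP ingredient shown is Laplace noise scaled to the clipped sum's sensitivity.

```python
import numpy as np

def clipped_mle_laplace(samples, eps, clip=10.0, rng=None):
    """Pure eps-DP estimate of lambda for Exp(lambda) samples.

    Clips each sample to [0, clip] so the sum has sensitivity `clip`,
    adds Laplace(clip / eps) noise, and inverts the noisy mean.
    The default threshold is a hypothetical choice, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(samples, dtype=float), 0.0, clip)
    noisy_sum = x.sum() + rng.laplace(scale=clip / eps)
    noisy_sum = max(noisy_sum, clip)  # keep the estimate finite and positive
    return len(samples) / noisy_sum   # MLE form: lambda_hat = n / sum(x_i)
```

Because each sample contributes at most `clip` to the sum, replacing one sample changes the sum by at most `clip`, so Laplace noise of scale `clip/eps` suffices for ε-DP by the standard Laplace mechanism argument.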

📝 Abstract
We study the problem of learning exponential distributions under differential privacy. Given $n$ i.i.d. samples from $\mathrm{Exp}(\lambda)$, the goal is to privately estimate $\lambda$ so that the learned distribution is close in total variation distance to the truth. We present two complementary pure DP algorithms: one adapts the classical maximum likelihood estimator via clipping and Laplace noise, while the other leverages the fact that the $(1-1/e)$-quantile equals $1/\lambda$. Each method excels in a different regime, and we combine them into an adaptive best-of-both algorithm achieving near-optimal sample complexity for all $\lambda$. We further extend our approach to Pareto distributions via a logarithmic reduction, prove nearly matching lower bounds using packing and group privacy \cite{Karwa2017FiniteSD}, and show how approximate $(\varepsilon,\delta)$-DP removes the need for externally supplied bounds. Together, these results give the first tight characterization of exponential distribution learning under DP and illustrate the power of adaptive strategies for heavy-tailed laws.
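The quantile identity behind the second estimator can be checked numerically: for $X \sim \mathrm{Exp}(\lambda)$, solving $1 - e^{-\lambda q} = 1 - 1/e$ gives $q = 1/\lambda$. A non-private sketch of this fact (the paper's actual DP quantile release is not shown here):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=100_000)  # Exp(lambda = 2)

# F(q) = 1 - exp(-lam * q) = 1 - 1/e  holds exactly when  q = 1 / lam
q = np.quantile(x, 1.0 - 1.0 / math.e)
lam_hat = 1.0 / q
```

A quantile-based estimate is attractive for privacy because a single quantile is far less sensitive to extreme samples than a sum, which is exactly where the clipped-MLE approach struggles.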
Problem

Research questions and friction points this paper is trying to address.

Learning exponential distributions under differential privacy constraints
Privately estimating parameter λ from i.i.d. samples
Achieving near-optimal sample complexity via adaptive algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clipped maximum likelihood estimation with Laplace noise
Quantile-based estimation using the (1−1/e)-quantile identity
Adaptive combination of both estimators for near-optimal sample complexity
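The logarithmic reduction to Pareto mentioned in the abstract can be sketched as well: if $X$ follows a Pareto law with scale $x_m = 1$ (an assumed normalization here) and shape $\alpha$, then $\log X \sim \mathrm{Exp}(\alpha)$, so any exponential-parameter estimator applies to the log-transformed samples. A non-private illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5  # Pareto shape parameter to recover

# Pareto(x_m = 1, alpha) via inverse-CDF sampling: X = U^(-1/alpha)
x = rng.random(100_000) ** (-1.0 / alpha)

y = np.log(x)               # log X  ~  Exp(alpha)
alpha_hat = 1.0 / y.mean()  # plain exponential MLE on the transformed data
```

In the private setting, one would simply feed `y` to an ε-DP exponential estimator such as the clipped-MLE or quantile method, inheriting its privacy guarantee by post-processing of the transformed data.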