🤖 AI Summary
This paper addresses the problem of efficiently estimating a single parameter in parametric models under differential privacy. Existing methods suffer from limitations in computational and statistical efficiency, as well as suboptimal accuracy bounds. To overcome these, we propose a novel framework grounded in the stability of local estimators—marking the first approach that enables self-generation and self-verification of private stability certificates. Our framework supports adaptive privacy mechanism design and achieves asymptotically instance-optimal error bounds, surpassing the accuracy limits of generic mechanisms such as Laplace noise injection or DP-SGD. Theoretical analysis is complemented by Monte Carlo simulations and empirical evaluation on real-world ACS and Census datasets. Results demonstrate that our method attains the optimal convergence rate and significantly outperforms baselines—especially in small-sample and high-dimensional constrained settings—thereby establishing a new paradigm for practical, high-accuracy differentially private parameter estimation.
📝 Abstract
We investigate differentially private estimators for individual parameters within larger parametric models. While generic private estimators exist, the estimators we provide repose on new local notions of estimand stability, and these notions allow procedures that provide private certificates of their own stability. By leveraging these private certificates, we provide computationally and statistical efficient mechanisms that release private statistics that are, at least asymptotically in the sample size, essentially unimprovable: they achieve instance optimal bounds. Additionally, we investigate the practicality of the algorithms both in simulated data and in real-world data from the American Community Survey and US Census, highlighting scenarios in which the new procedures are successful and identifying areas for future work.