Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions

📅 2023-01-28
🏛️ Annual Conference Computational Learning Theory
📈 Citations: 22
Influential: 3
🤖 AI Summary
This paper addresses efficient differentially private estimation of the mean and covariance of high-dimensional subgaussian distributions. It gives the first polynomial-time algorithm that achieves affine invariance and Mahalanobis error guarantees with near-optimal sample complexity: for mean estimation, $O(d/\alpha^2 + d\sqrt{\log(1/\delta)}/(\alpha\varepsilon) + d\log(1/\delta)/\varepsilon)$; for covariance estimation, spectral-norm accuracy when $n \gtrsim d^{3/2}$ and Frobenius-norm accuracy when $n \gtrsim d^2$. Notably, the method supports private covariance estimation with $n = o(d^2)$ samples, the first such result, overcoming a long-standing barrier. It replaces prior exponential-time approaches with a matrix-accelerated implementation running in $\tilde{O}(nd^{\omega-1} + nd/\varepsilon)$ time, where $\omega < 2.38$ is the matrix multiplication exponent. This enables fast private learning of Gaussian distributions under total variation distance, establishing new state-of-the-art efficiency and statistical guarantees for high-dimensional private statistics.
📝 Abstract
We present a fast, differentially private algorithm for high-dimensional covariance-aware mean estimation with nearly optimal sample complexity. Only exponential-time estimators were previously known to achieve this guarantee. Given $n$ samples from a (sub-)Gaussian distribution with unknown mean $\mu$ and covariance $\Sigma$, our $(\varepsilon,\delta)$-differentially private estimator produces $\tilde{\mu}$ such that $\|\mu - \tilde{\mu}\|_{\Sigma} \leq \alpha$ as long as $n \gtrsim \frac{d}{\alpha^2} + \frac{d\sqrt{\log(1/\delta)}}{\alpha\varepsilon} + \frac{d\log(1/\delta)}{\varepsilon}$. The Mahalanobis error metric $\|\mu - \hat{\mu}\|_{\Sigma}$ measures the distance between $\hat{\mu}$ and $\mu$ relative to $\Sigma$; it characterizes the error of the sample mean. Our algorithm runs in time $\tilde{O}(nd^{\omega-1} + nd/\varepsilon)$, where $\omega < 2.38$ is the matrix multiplication exponent. We adapt an exponential-time approach of Brown, Gaboardi, Smith, Ullman, and Zakynthinou (2021), giving efficient variants of its stable mean and covariance estimation subroutines that also improve the sample complexity to the nearly optimal bound above. Our stable covariance estimator can be turned into a private covariance estimator for unrestricted subgaussian distributions. With $n \gtrsim d^{3/2}$ samples, our estimate is accurate in spectral norm. This is the first such algorithm using $n = o(d^2)$ samples, answering an open question posed by Alabi et al. (2022). With $n \gtrsim d^2$ samples, our estimate is accurate in Frobenius norm. This leads to a fast, nearly optimal algorithm for private learning of unrestricted Gaussian distributions in TV distance. Duchi, Haque, and Kuditipudi (2023) obtained similar results independently and concurrently.
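The Mahalanobis error metric in the abstract can be made concrete. The sketch below (illustrative only, not the paper's implementation) computes $\|\mu - \hat{\mu}\|_{\Sigma} = \sqrt{(\mu - \hat{\mu})^{\top} \Sigma^{-1} (\mu - \hat{\mu})}$ for the non-private sample mean of a toy Gaussian, showing how the metric rescales the error in each direction by that direction's standard deviation:

```python
import numpy as np

def mahalanobis_error(mu, mu_hat, sigma):
    """||mu - mu_hat||_Sigma = sqrt((mu - mu_hat)^T Sigma^{-1} (mu - mu_hat))."""
    diff = mu - mu_hat
    # Solve Sigma x = diff instead of explicitly inverting Sigma.
    return float(np.sqrt(diff @ np.linalg.solve(sigma, diff)))

rng = np.random.default_rng(0)
d, n = 5, 10_000
mu = np.zeros(d)
# An ill-conditioned covariance: the Mahalanobis metric is invariant to
# affine reparameterizations, unlike the plain Euclidean error.
sigma = np.diag(np.array([1e-2, 0.1, 1.0, 10.0, 100.0]))
samples = rng.multivariate_normal(mu, sigma, size=n)
mu_hat = samples.mean(axis=0)  # non-private sample mean, for reference
print(mahalanobis_error(mu, mu_hat, sigma))  # small: on the order of sqrt(d/n)
```

For the sample mean this error concentrates around $\sqrt{d/n}$, which is why the non-private term in the sample-complexity bound is $d/\alpha^2$.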
Problem

Research questions and friction points this paper is trying to address.

Developing efficient private mean estimation for high-dimensional subgaussian distributions
Creating fast differentially private covariance estimation with optimal samples
Solving private learning of Gaussian distributions with minimal sample complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fast differentially private covariance-aware mean estimation
Efficient stable mean and covariance estimation subroutines
Nearly optimal sample complexity for subgaussian distributions
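As a quick sanity check on the mean-estimation bound quoted in the abstract, a hypothetical helper (not part of the paper) can evaluate $d/\alpha^2 + d\sqrt{\log(1/\delta)}/(\alpha\varepsilon) + d\log(1/\delta)/\varepsilon$ for given parameters:

```python
import math

def mean_estimation_samples(d, alpha, epsilon, delta):
    """Evaluate the paper's mean-estimation sample-complexity bound
    n >~ d/alpha^2 + d*sqrt(log(1/delta))/(alpha*epsilon) + d*log(1/delta)/epsilon,
    up to the constants hidden by the O(.) notation."""
    log_term = math.log(1.0 / delta)
    return (d / alpha**2
            + d * math.sqrt(log_term) / (alpha * epsilon)
            + d * log_term / epsilon)

# With alpha, epsilon, delta held constant, all three terms are linear in d,
# so the bound scales linearly with the dimension.
print(mean_estimation_samples(d=100, alpha=0.5, epsilon=1.0, delta=1e-6))
```

The linear dependence on $d$ is what makes the bound nearly optimal: even the non-private sample mean already needs $n \gtrsim d/\alpha^2$ samples.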
Gavin Brown
Department of Computer Science, Boston University
Samuel B. Hopkins
Massachusetts Institute of Technology
Algorithms · Complexity · Machine Learning
Adam D. Smith
Department of Computer Science, Boston University