🤖 AI Summary
This paper addresses Ashtiani's conjecture that learnability under total variation (TV) distance implies differentially private learnability. Method: The authors construct an explicit family of distributions that is learnable to constant TV error with a finite number of samples in the non-private setting, yet provably not learnable to the same accuracy under $(\varepsilon, \delta)$-differential privacy. The analysis combines information-theoretic lower bounds, formal modeling within the private learning framework, and a characterization of the sample complexity of distribution learning in TV distance under privacy constraints. Contribution/Results: This is the first rigorous proof of a separation between classical and differentially private learnability. The result refutes Ashtiani's conjecture and establishes that differential privacy imposes intrinsic limitations on statistical learning, beyond a mere sample-size overhead. This yields a theoretical criterion for the feasibility boundary of private learning and clarifies the inherent trade-off between privacy guarantees and statistical utility in distribution learning.
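For reference, the two notions at the heart of the separation are standard (stated here for the reader's convenience, not quoted from the paper). The total variation distance between distributions $P$ and $Q$ on a domain $\mathcal{X}$ is

$$ d_{\mathrm{TV}}(P, Q) = \sup_{S \subseteq \mathcal{X}} \bigl| P(S) - Q(S) \bigr|, $$

and a randomized learner $A$ is $(\varepsilon, \delta)$-differentially private if, for all datasets $D, D'$ differing in a single sample and all measurable events $E$,

$$ \Pr[A(D) \in E] \le e^{\varepsilon} \, \Pr[A(D') \in E] + \delta. $$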
📝 Abstract
We give an example of a class of distributions that is learnable in total variation distance with a finite number of samples, but not learnable under $(\varepsilon, \delta)$-differential privacy. This refutes a conjecture of Ashtiani.
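To make the non-private side of the statement concrete, below is a minimal, hypothetical sketch (not the paper's construction) of learning a discrete distribution in TV distance from finitely many samples via the plug-in empirical estimator. Over a support of size $k$, a standard fact is that $O(k/\alpha^2)$ samples suffice for TV error $\alpha$ without privacy; the class constructed in the paper is one for which no finite sample bound exists once an $(\varepsilon, \delta)$-DP constraint is imposed.

```python
import numpy as np

def tv_distance(p, q):
    # Total variation distance between two distributions on the same
    # finite support: d_TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)|.
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def empirical_distribution(samples, support_size):
    # Plug-in (histogram) estimate from i.i.d. samples; this is the
    # textbook non-private learner for finite discrete distributions.
    counts = np.bincount(samples, minlength=support_size)
    return counts / counts.sum()

rng = np.random.default_rng(0)
true_p = np.array([0.5, 0.3, 0.2])  # toy "unknown" target distribution
samples = rng.choice(len(true_p), size=10_000, p=true_p)
p_hat = empirical_distribution(samples, len(true_p))
print(f"TV(true, empirical) = {tv_distance(true_p, p_hat):.4f}")
```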