🤖 AI Summary
This study investigates the impact of differential privacy (DP) on the performance of language identification and generation tasks, characterizing fundamental limits within an agnostic statistical learning framework. By combining information-theoretic and statistical learning arguments, the work establishes matching upper and lower bounds for both approximate $(\varepsilon, \delta)$-DP and pure $\varepsilon$-DP. The primary contributions are twofold: it demonstrates, for the first time in language learning, that approximate DP can achieve the same optimal error rate as the non-private setting, and it shows that under pure DP the exponent of the convergence rate degrades by at most a multiplicative factor of $\min\{1, \varepsilon\}$, with this bound being tight. These results precisely quantify the performance cost incurred by enforcing privacy guarantees.
📝 Abstract
As large language models (LLMs) are increasingly trained on sensitive user data, understanding the fundamental cost of privacy in language learning becomes essential. We initiate the study of differentially private (DP) language identification and generation in the agnostic statistical setting, establishing algorithms and matching lower bounds that precisely quantify the cost of privacy. For both tasks, approximate $(\varepsilon, \delta)$-DP with constant $\varepsilon > 0$ recovers the non-private error rates: $\exp(-r(n))$ for identification (for any $r(n) = o(n)$) and $\exp(-\Omega(n))$ for generation. Under pure $\varepsilon$-DP, the exponents degrade by a multiplicative factor of $\min\{1, \varepsilon\}$, which we show is tight up to constants. Notably, for generation under pure DP with mild assumptions, the upper bound $\exp(-\min\{1,\varepsilon\} \cdot \Omega(n))$ matches the lower bound up to constants, establishing an optimal rate. Our results show that the cost of privacy in language learning is surprisingly mild: absent entirely under approximate DP, and exactly a $\min\{1,\varepsilon\}$ factor in the exponent under pure DP.