🤖 AI Summary
This work addresses the three-way trade-off among computational cost, privacy budget, and model utility in differentially private (DP) large language model (LLM) training. It establishes, for the first time, a scaling law tailored to DP LLMs. Methodologically, the authors model how noise accumulation and gradient clipping in DP-SGD affect scaling behavior, combining empirical fitting with joint privacy-utility optimization. The resulting law achieves prediction errors under 5% across diverse model scales and privacy budgets (ε ∈ [1, 8]). The core contributions are threefold: (i) filling a critical theoretical gap in the scaling formalism for DP LLMs; (ii) uncovering nontrivial, DP-specific scaling laws, e.g., sublinear utility growth under a fixed ε and superlinear privacy cost under a fixed model size; and (iii) providing actionable guidelines for privacy-budget allocation and for co-scaling model architecture, dataset size, and hyperparameters, thereby significantly improving the training efficiency and practical deployability of DP LLMs.
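For readers unfamiliar with the DP-SGD mechanics the summary refers to, a single aggregation step (per-example gradient clipping followed by Gaussian noise) can be sketched as follows. This is a minimal NumPy illustration under standard DP-SGD assumptions, not the paper's implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (illustrative sketch):
    clip each per-example gradient to L2 norm `clip_norm`, sum,
    add Gaussian noise with std `noise_multiplier * clip_norm`,
    and average over the batch."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clipping bounds each example's contribution (its sensitivity), which is what makes the added Gaussian noise yield a differential privacy guarantee; the noise that remains after averaging is the source of the DP-specific scaling behavior the paper models.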
📝 Abstract
Scaling laws have emerged as important components of large language model (LLM) training: they predict performance gains from scale and guide hyper-parameter choices that would otherwise be expensive to tune. LLMs also rely on large, high-quality training datasets, like those sourced from (sometimes sensitive) user data. Training models on this sensitive user data requires careful privacy protections like differential privacy (DP). However, the dynamics of DP training are significantly different, and consequently its scaling laws are not yet fully understood. In this work, we establish scaling laws that accurately model the intricacies of DP LLM training, providing a complete picture of the compute-privacy-utility tradeoffs and the optimal training configurations in many settings.