🤖 AI Summary
This work addresses the challenges of differential privacy guarantees, model compression efficiency, and error accumulation in autoregressive generation when fine-tuning large language models on sensitive-domain corpora. The authors propose a synthetic-text-free differentially private knowledge distillation framework that applies DP-SGD solely to the student model, leveraging a frozen teacher model to provide fine-grained, token-level supervision along the student's own generation trajectories. This approach reduces differentially private compression to a single student training loop, eliminating the need for differentially private teacher training or offline synthetic data generation. By employing on-policy distillation, the method achieves a better privacy-utility trade-off in generation quality. Under a strict privacy budget (ε=2.0), the model attains perplexities of 41.68 and 30.63 on the Yelp and BigPatent datasets, respectively, significantly outperforming existing differentially private fine-tuning and off-policy distillation methods.
📝 Abstract
Large language models (LLMs) are increasingly adapted to proprietary and domain-specific corpora that contain sensitive information, creating a tension between formal privacy guarantees and efficient deployment through model compression. Differential privacy (DP), typically enforced via DP-SGD, provides record-level protection but often incurs substantial utility loss in autoregressive generation, where optimization noise can amplify exposure bias and compounding errors along long rollouts. Existing approaches to private distillation either apply DP-SGD to both teacher and student, worsening computation and the privacy-utility tradeoff, or rely on DP synthetic text generation from a DP-trained teacher, avoiding DP on the student at the cost of DP-optimizing a large teacher and introducing an offline generation pipeline. We propose **Differentially Private On-Policy Distillation (DP-OPD)**, a synthesis-free framework that enforces privacy solely through DP-SGD on the student while leveraging a frozen teacher to provide dense token-level targets on *student-generated* trajectories. DP-OPD instantiates this idea via *private generalized knowledge distillation* on continuation tokens. Under a strict privacy budget (ε=2.0), DP-OPD improves perplexity over DP fine-tuning and off-policy DP distillation, and outperforms synthesis-based DP distillation (Yelp: 44.15→41.68; BigPatent: 32.43→30.63), while substantially simplifying the training pipeline. In particular, **DP-OPD collapses private compression into a single DP student-training loop** by eliminating DP teacher training and offline synthetic text generation. Code will be released upon publication at https://github.com/khademfatemeh/dp_opd.
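The single-loop structure described above can be sketched in miniature. The following NumPy toy, a simplified illustration rather than the authors' implementation, stands in linear-softmax "models" for the teacher and student, rolls out continuation tokens *from the student*, lets the frozen teacher supply the per-token distillation target at each step, and applies the two DP-SGD ingredients (per-record gradient clipping and Gaussian noise) only to the student update. All names, dimensions, and hyperparameters here are hypothetical; a real system would use transformer LLMs, a proper KD divergence over continuation tokens, and a privacy accountant to map the noise multiplier to an (ε, δ) guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, T = 8, 4, 5              # toy vocab size, embedding dim, rollout length
C, sigma, lr = 1.0, 0.5, 0.1   # DP-SGD clip norm, noise multiplier, step size

E = rng.normal(size=(V, D))          # shared token embeddings (context = last token)
W_teacher = rng.normal(size=(D, V))  # frozen teacher head (never DP-trained)
W_student = rng.normal(size=(D, V))  # student head, the only thing DP-SGD touches

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def per_record_grad(prompt_tok, W_s):
    """Roll out T tokens from the student; the teacher scores each step on-policy."""
    g, tok = np.zeros_like(W_s), prompt_tok
    for _ in range(T):
        x = E[tok]
        p_s = softmax(x @ W_s)        # student's next-token distribution
        p_t = softmax(x @ W_teacher)  # frozen teacher's target on the *student's* prefix
        g += np.outer(x, p_s - p_t)   # grad of forward-KL KD loss for linear-softmax
        tok = rng.choice(V, p=p_s)    # on-policy: next context token sampled from student
    return g

def dp_opd_step(prompts, W_s):
    """One DP-SGD update on the student: clip per-record grads, add Gaussian noise."""
    grads = [per_record_grad(p, W_s) for p in prompts]
    grads = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12)) for g in grads]
    g_bar = np.mean(grads, axis=0)
    g_bar += rng.normal(scale=sigma * C / len(prompts), size=g_bar.shape)
    return W_s - lr * g_bar

prompts = rng.integers(0, V, size=16)  # toy stand-in for private records
W_new = dp_opd_step(prompts, W_student)
```

Note how the privacy boundary sits entirely inside `dp_opd_step`: the teacher's forward passes and the student's rollouts add no separate DP cost, which is what lets the method drop DP teacher training and offline synthetic-text generation.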