🤖 AI Summary
This study addresses the joint optimization of patient privacy preservation and diagnostic accuracy in clinical language model training. We systematically evaluate four differential privacy (DP) strategies (DP-SGD, DP synthetic data generation, DP fine-tuning, and knowledge distillation) for automated ICD-9 coding, using a unified 1B-parameter model and privacy budgets ε = 4–6. To our knowledge, this is the first head-to-head comparison of these methods in clinical NLP. Results show that knowledge distillation achieves the best utility–privacy trade-off at moderate ε: it recovers 63% of the non-private baseline's diagnostic performance while reducing the membership inference attack AUC to ≈0.5, indicating near-optimal empirical privacy protection. Our work establishes an empirical benchmark and a practical pathway toward deployable, privacy-preserving clinical NLP systems.
📝 Abstract
Large language models trained on clinical text risk exposing sensitive patient information, yet differential privacy (DP) methods often severely degrade the diagnostic accuracy needed for deployment. Despite rapid progress in DP optimisation and text generation, it remains unclear which privacy-preserving strategy actually works best for clinical language tasks. We present the first systematic head-to-head comparison of four training pipelines for automated diagnostic coding from hospital discharge summaries. All pipelines use identical 1B-parameter models and matched privacy budgets to predict ICD-9 codes. At moderate and relaxed privacy budgets ($\varepsilon \in \{4, 6\}$), knowledge distillation from DP-trained teachers outperforms both direct DP-SGD and DP-synthetic data training, recovering up to 63% of the non-private performance whilst maintaining strong empirical privacy (membership-inference AUC $\approx 0.5$). These findings expose large differences in the privacy–utility trade-off across training pipelines and identify knowledge distillation as the most practical route to privacy-preserving clinical NLP.