How to Train Private Clinical Language Models: A Comparative Study of Privacy-Preserving Pipelines for ICD-9 Coding

📅 2025-11-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the joint optimisation of patient privacy and diagnostic accuracy in clinical language model training. We systematically evaluate four differential privacy (DP) strategies (DP-SGD, DP synthetic data generation, DP fine-tuning, and knowledge distillation) for automated ICD-9 coding, using a unified 1B-parameter model and privacy budgets ε = 4–6. To our knowledge, this is the first head-to-head comparison of these methods in clinical NLP. Results show that knowledge distillation achieves the best utility–privacy trade-off at moderate ε: it recovers 63% of the non-private baseline's diagnostic performance while reducing membership-inference attack AUC to ≈0.5, indicating near-optimal empirical privacy. Our work establishes an empirical benchmark and a practical path toward deployable, privacy-preserving clinical NLP systems.
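The DP-SGD strategy evaluated above trains the model with per-example gradient clipping plus Gaussian noise. A minimal NumPy sketch of one such update step (illustrative only; the function name, learning rate, and defaults here are assumptions, not the paper's implementation):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip each example's gradient to `clip_norm`,
    sum, add Gaussian noise scaled to the clipping norm, then average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(clipped)
    # Noise standard deviation is tied to the clip norm, so the privacy
    # guarantee does not depend on the (unbounded) raw gradient norms.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch
    return params - lr * noisy_mean
```

In practice the privacy budget ε is then obtained from the noise multiplier, sampling rate, and step count via a privacy accountant, which this sketch omits.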

📝 Abstract
Large language models trained on clinical text risk exposing sensitive patient information, yet differential privacy (DP) methods often severely degrade the diagnostic accuracy needed for deployment. Despite rapid progress in DP optimisation and text generation, it remains unclear which privacy-preserving strategy actually works best for clinical language tasks. We present the first systematic head-to-head comparison of four training pipelines for automated diagnostic coding from hospital discharge summaries. All pipelines use identical 1B-parameter models and matched privacy budgets to predict ICD-9 codes. At moderate and relaxed privacy budgets ($\varepsilon \in \{4, 6\}$), knowledge distillation from DP-trained teachers outperforms both direct DP-SGD and DP-synthetic data training, recovering up to 63% of the non-private performance whilst maintaining strong empirical privacy (membership-inference AUC $\approx 0.5$). These findings expose large differences in the privacy-utility trade-off across architectures and identify knowledge distillation as the most practical route to privacy-preserving clinical NLP.
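The membership-inference AUC reported in the abstract measures how well an attacker's score separates training members from non-members; 0.5 means the attack does no better than chance. A self-contained sketch of that metric (the function name and score format are illustrative assumptions):

```python
import numpy as np

def mia_auc(member_scores, nonmember_scores):
    """AUC of a membership-inference attack: the probability that a
    randomly chosen training member receives a higher attack score than
    a randomly chosen non-member (ties count half). 0.5 = chance level."""
    m = np.asarray(member_scores, dtype=float)
    n = np.asarray(nonmember_scores, dtype=float)
    # Pairwise rank comparison (Mann-Whitney U formulation of AUC).
    wins = (m[:, None] > n[None, :]).sum() + 0.5 * (m[:, None] == n[None, :]).sum()
    return wins / (len(m) * len(n))
```

When member and non-member score distributions coincide, this returns exactly 0.5, which is the "near-optimal privacy" regime the paper reports for distillation.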
Problem

Research questions and friction points this paper is trying to address.

Compares privacy-preserving pipelines for clinical ICD-9 coding tasks
Evaluates trade-offs between diagnostic accuracy and patient data privacy
Identifies optimal methods to maintain utility while ensuring strong privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation from DP-trained teachers
Comparative study of four privacy-preserving pipelines
Recovery of non-private performance while maintaining privacy
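The distillation pipeline above trains a student on the soft label distribution of a DP-trained teacher. A minimal sketch of the standard temperature-scaled distillation loss (Hinton-style; the exact loss used in the paper is not specified here, so treat this as an assumed baseline formulation):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradient magnitudes stay comparable across T."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()
```

Because the student only ever sees the teacher's outputs, its exposure to any individual training record is mediated by the teacher's DP guarantee, which is the intuition behind the low membership-inference AUC.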
Mathieu Dufour
Department of Mathematics, Imperial College London
Andrew Duncan
Newcastle University