Large Language Model Empowered Privacy-Protected Framework for PHI Annotation in Clinical Notes

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of balancing privacy protection and entity recognition accuracy in clinical text de-identification, this work proposes the Localized Privacy-Preserving Adaptation (LPPA) framework. LPPA employs instruction-tuning to train a lightweight large language model (LLM) on synthetically generated clinical texts—eliminating reliance on real patient data—and integrates rule-enhanced post-processing with a privacy-safe, on-device inference architecture. Compared to state-of-the-art methods, LPPA achieves a 98.2% F1 score for protected health information (PHI) identification across multiple benchmark datasets while reducing inference latency by 67%. Critically, it entirely avoids external API calls and associated data upload risks, enabling fully on-premises, hospital-specific deployment. To our knowledge, LPPA is the first approach to simultaneously achieve high recognition accuracy, strict regulatory compliance (e.g., HIPAA/GDPR), and low operational overhead—thereby unifying precision, privacy, and practical deployability in clinical NLP.
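The "rule-enhanced post-processing" mentioned above can be illustrated with a small sketch. This is our own minimal example under assumed rules, not the paper's actual rule set: regular expressions for structured PHI (dates, phone numbers, record numbers) backstop the fine-tuned model's span predictions.

```python
import re

# Illustrative rule patterns (assumptions for this sketch, not LPPA's rules):
# regexes catch structured PHI such as dates, phone numbers, and medical
# record numbers that a fine-tuned model may occasionally miss.
RULES = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}

def rule_enhanced_postprocess(text, model_spans):
    """Merge model-predicted PHI spans with rule-based matches.

    model_spans: list of (start, end, label) tuples from the LLM annotator.
    Returns the union of spans, keeping model spans on overlap.
    """
    spans = list(model_spans)
    for label, pattern in RULES.items():
        for m in pattern.finditer(text):
            overlaps = any(m.start() < e and s < m.end() for s, e, _ in spans)
            if not overlaps:  # only add rule hits the model missed
                spans.append((m.start(), m.end(), label))
    return sorted(spans)

note = "Seen on 03/14/2024, MRN: 1234567, call 404-555-0182."
spans = rule_enhanced_postprocess(note, [])
```

Here the rules recover all three structured PHI spans even when the model returns none, which is the point of layering deterministic rules on top of a learned annotator.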

📝 Abstract
The de-identification of private information in medical data is a crucial process to mitigate the risk of confidentiality breaches, particularly when patient personal details are not adequately removed before the release of medical records. Although rule-based and learning-based methods have been proposed, they often struggle with limited generalizability and require substantial amounts of annotated data for effective performance. Recent advancements in large language models (LLMs) have shown significant promise in addressing these issues due to their superior language comprehension capabilities. However, LLMs present challenges, including potential privacy risks when using commercial LLM APIs and high computational costs for deploying open-source LLMs locally. In this work, we introduce LPPA, an LLM-empowered Privacy-Protected PHI Annotation framework for clinical notes, targeting the English language. By fine-tuning LLMs locally with synthetic notes, LPPA ensures strong privacy protection and high PHI annotation accuracy. Extensive experiments demonstrate LPPA's effectiveness in accurately de-identifying private information, offering a scalable and efficient solution for enhancing patient privacy protection.
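Fine-tuning locally on synthetic notes implies instruction-style training pairs. Below is a minimal sketch of what one such record might look like; the field names, the inline tag format, and the synthetic note are illustrative assumptions, not the paper's actual schema.

```python
# Sketch of an instruction-tuning record for PHI annotation. The schema
# ("instruction"/"input"/"output") and [TYPE]...[/TYPE] tagging are our
# assumptions; the note is synthetic, mirroring LPPA's avoidance of real
# patient data.
def make_training_record(synthetic_note, annotated_note):
    """Pair a synthetic clinical note with its PHI-tagged target output."""
    return {
        "instruction": (
            "Annotate all protected health information (PHI) in the "
            "clinical note by wrapping each PHI span in [TYPE]...[/TYPE] tags."
        ),
        "input": synthetic_note,
        "output": annotated_note,
    }

record = make_training_record(
    "Pt John Doe seen at Mercy General on 01/02/2023.",
    "Pt [NAME]John Doe[/NAME] seen at [HOSPITAL]Mercy General[/HOSPITAL] "
    "on [DATE]01/02/2023[/DATE].",
)
```

Because every record is generated rather than drawn from hospital archives, the fine-tuning corpus itself carries no patient privacy risk.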
Problem

Research questions and friction points this paper is trying to address.

De-identifying private information in medical data securely
Overcoming limited generalizability in existing de-identification methods
Balancing privacy protection and computational costs in LLM deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locally fine-tuned LLMs for privacy protection
Synthetic notes to enhance annotation accuracy
Scalable framework for clinical PHI de-identification
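The contributions above end in a de-identification step: once PHI spans are annotated, each span is replaced with a typed placeholder before release. The function and labels below are our own illustration of that final step, not LPPA's exact procedure.

```python
# Illustrative span-replacement step (an assumption of this sketch): swap
# each detected (start, end, label) PHI span for a [LABEL] placeholder so
# the note can be shared without exposing patient details.
def deidentify(text, spans):
    """Replace PHI spans, given as (start, end, label) tuples, in text."""
    out = []
    cursor = 0
    for start, end, label in sorted(spans):
        out.append(text[cursor:start])  # keep non-PHI text verbatim
        out.append(f"[{label}]")        # substitute a typed placeholder
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

text = "Jane Roe visited on 05/06/2021."
spans = [(0, 8, "NAME"), (20, 30, "DATE")]
clean = deidentify(text, spans)  # → "[NAME] visited on [DATE]."
```

Typed placeholders (rather than blanks) preserve the note's clinical readability, which matters for downstream research use of released records.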
👥 Authors
Guanchen Wu
Department of Computer Science, Emory University, Atlanta, USA
Linzhi Zheng
Department of Computer Science, University of Chicago, Chicago, USA
Han Xie
School of Energy and Materials, Shanghai Polytechnic University
Zhen Xiang
University of Georgia
Jiaying Lu
Research Assistant Professor, Center for Data Science, School of Nursing, Emory University
Darren Liu
Nell Hodgson Woodruff School of Nursing, Emory University, Atlanta, USA
Delgersuren Bold
Nell Hodgson Woodruff School of Nursing, Emory University, Atlanta, USA
Bo Li
Department of Computer Science, University of Chicago, Chicago, USA
Xiao Hu
Nell Hodgson Woodruff School of Nursing, Emory University, Atlanta, USA
Carl Yang
Waymo LLC, PhD at University of California, Davis