🤖 AI Summary
Large language models (LLMs) often inadvertently disclose personally identifiable information (PII) when generating summaries of sensitive-domain texts (e.g., clinical or legal documents), posing serious privacy risks.
Method: We conduct the first systematic, cross-model evaluation of PII leakage in privacy-preserving summarization across six LLMs (two closed-source and four open-weight), spanning diverse architectures and parameter scales, and benchmark two privacy-control paradigms: prompt engineering and fine-tuning. The evaluation combines quantitative metrics (PII recall rate, privacy score), qualitative analysis, and human assessment.
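The summary names a PII recall rate among the quantitative metrics. A minimal sketch of how such a leakage metric could be computed is below, assuming it measures the fraction of known source-document PII strings that reappear verbatim in the generated summary; the paper's exact definition, matching rules, and the example entities here are assumptions, not taken from the source.

```python
import re

def pii_recall(source_pii: list[str], summary: str) -> float:
    """Fraction of source-document PII strings that reappear in the
    summary (0.0 = no leakage, 1.0 = every PII entity leaked)."""
    if not source_pii:
        return 0.0
    # Case-insensitive verbatim match; real systems might also need
    # fuzzy matching to catch paraphrased or partially redacted PII.
    leaked = sum(
        1 for ent in source_pii
        if re.search(re.escape(ent), summary, re.IGNORECASE)
    )
    return leaked / len(source_pii)

# Toy example with fabricated PII, not data from the paper.
source_entities = ["John Doe", "1985-03-14", "Springfield General Hospital"]
summary = "The patient, John Doe, was treated at Springfield General Hospital."
print(pii_recall(source_entities, summary))  # 2 of 3 entities leak -> ~0.667
```

A lower score is better here: a privacy-preserving summarizer should approach 0.0, which is the near-zero exposure the paper reports for human summarizers.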
Results: All tested LLMs exhibit significant PII leakage, with privacy protection substantially inferior to human summarizers; humans achieve near-zero PII exposure while preserving summary quality. This work reveals a critical privacy-security bottleneck in LLM-based sensitive-text summarization and establishes a rigorous, empirically grounded evaluation framework to advance trustworthy AI deployment in high-stakes domains.
📝 Abstract
In sensitive domains such as medicine and law, protecting sensitive information is critical, with data-protection laws strictly prohibiting the disclosure of personal data. This poses challenges for sharing valuable data such as medical reports and legal case summaries. While language models (LMs) have shown strong performance in text summarization, it remains an open question to what extent they can produce privacy-preserving summaries from non-private source documents. In this paper, we perform a comprehensive study of privacy risks in LM-based summarization across two closed- and four open-weight models of different sizes and families. We experiment with both prompting and fine-tuning strategies for privacy preservation across a range of summarization datasets, including the medical and legal domains. Our quantitative and qualitative analysis, including human evaluation, shows that LMs frequently leak personally identifiable information in their summaries, in contrast to human-generated privacy-preserving summaries, which demonstrate significantly higher privacy protection levels. These findings highlight a substantial gap between current LM capabilities and expert human performance in privacy-sensitive summarization tasks.