🤖 AI Summary
This work addresses the risk of leaking sensitive personally identifiable information (PII) in user prompts submitted to large language models. The authors propose a character-level differential privacy mechanism that applies random perturbations to each character in the input prompt, providing differentiated protection of sensitive and non-sensitive content without requiring explicit identification or annotation of PII. The perturbed prompt is first reconstructed by a remote language model before downstream task execution, leveraging contextual cues and word frequency statistics to improve the fidelity of non-sensitive content recovery. Experimental results on the i2b2/UTHealth and Enron datasets show that the method reduces the reconstruction rate of sensitive PII to nearly the theoretical random-guessing rate while significantly outperforming word-level differential privacy and rule-based sanitization baselines, achieving a strong balance between privacy preservation and utility retention.
📝 Abstract
Large Language Models (LLMs) generate responses based on user prompts. These prompts often contain highly sensitive information, including personally identifiable information (PII), which could be exposed to third parties hosting the models. In this work, we propose a new method to sanitize user prompts. Our mechanism uses the randomized response mechanism of differential privacy to randomly and independently perturb each character in a word. The perturbed text is then sent to a remote LLM, which first performs a prompt restoration and subsequently performs the intended downstream task. The idea is that the restoration step can reconstruct non-sensitive words even when they are perturbed, thanks to cues from the context and the fact that such words are often very common. Conversely, perturbation makes reconstruction of sensitive words difficult because they are rare. We experimentally validate our method on two datasets, i2b2/UTHealth and Enron, using two LLMs: Llama-3.1 8B Instruct and GPT-4o mini. We also compare our approach with a word-level differentially private mechanism and with a rule-based PII redaction baseline, using a unified privacy-utility evaluation. Our results show that sensitive PII tagged in these datasets is reconstructed at a rate close to the theoretical rate of reconstructing completely random words, whereas non-sensitive words are reconstructed at a much higher rate. Our method has the advantage that it can be applied without explicitly identifying sensitive pieces of information in the prompt, while showing a good privacy-utility tradeoff for downstream tasks.
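The core mechanism described above — randomized response applied independently to each character — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the alphabet choice, the handling of non-alphabet characters, and the function names (`perturb_char`, `sanitize_prompt`) are assumptions for exposition. Under ε-local differential privacy with randomized response over an alphabet of size k, each character is kept with probability e^ε / (e^ε + k − 1) and otherwise replaced by a uniformly random other symbol.

```python
import math
import random
import string

def perturb_char(c: str, epsilon: float, alphabet: str = string.ascii_lowercase) -> str:
    """Randomized response on one character (illustrative sketch).

    Keeps c with probability e^eps / (e^eps + k - 1), where k = |alphabet|;
    otherwise returns a uniformly random different symbol from the alphabet.
    """
    if c not in alphabet:
        return c  # sketch choice: leave punctuation/whitespace untouched
    k = len(alphabet)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return c
    # sample uniformly from the remaining k - 1 symbols
    return random.choice([a for a in alphabet if a != c])

def sanitize_prompt(prompt: str, epsilon: float) -> str:
    """Perturb every character of the prompt independently."""
    return "".join(perturb_char(ch, epsilon) for ch in prompt.lower())
```

The sanitized string would then be sent to the remote LLM for restoration and the downstream task. Intuitively, common words survive restoration because context and word-frequency priors let the model correct a few flipped characters, while rare sensitive tokens (names, IDs) have no such prior and stay unrecoverable.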