🤖 AI Summary
To address the leakage of sensitive context information, such as personally identifiable information (PII), during large language model (LLM) inference, whether accidental or adversarially induced, this paper proposes DP-Fusion, the first framework to enable token-level differentially private inference (DPI). DP-Fusion achieves a provable ε-differential-privacy guarantee by partitioning sensitive tokens into privacy groups and fusing the output distributions obtained from multiple forward passes. Unlike existing approaches, it supports a fine-grained, tunable privacy/utility trade-off via the parameter ε, substantially improving the balance between PII suppression and textual utility in tasks such as document de-identification. Experiments show that DP-Fusion effectively mitigates PII leakage across a range of ε settings while preserving semantic coherence and task performance.
📝 Abstract
Large language models (LLMs) can leak sensitive information from their context through generated outputs, either accidentally or when prompted adversarially. Existing defenses that aim to preserve context privacy during inference either lack formal guarantees or suffer from a poor utility/privacy trade-off. We propose DP-Fusion, a token-level Differentially Private Inference (DPI) mechanism that provably bounds how much an LLM's outputs reveal about sensitive tokens in its context. We demonstrate DPI through the task of document privatization, where the goal is to paraphrase documents so that sensitive content (e.g., Personally Identifiable Information, PII) cannot be reliably inferred, while still preserving the overall utility of the text. This is controlled by a parameter $ε$: $ε=0$ hides PII entirely, while higher values trade off privacy for improved paraphrase quality. DP-Fusion works as follows: (i) partition sensitive tokens into disjoint privacy groups, (ii) run the LLM once per group, and (iii) blend the output distributions so that the final output remains within a fixed statistical distance of the baseline distribution produced when no privacy group is revealed. This approach allows fine-grained control over the privacy/utility trade-off but requires multiple LLM forward passes.
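The fusion step (iii) can be illustrated with a small sketch. The abstract does not spell out the exact blending rule, so the following is an assumption: each per-group distribution is mixed with the baseline distribution (the one produced with no privacy group revealed) using the largest mixing weight that keeps every token's probability ratio within $e^{\pm\varepsilon}$ of the baseline, and the per-group blends are then averaged. The function name `fuse_distributions` and the closed-form weight computation are illustrative, not the paper's implementation.

```python
import numpy as np

def fuse_distributions(p_base, group_dists, eps):
    """Blend per-group next-token distributions with the baseline so the
    fused distribution stays within max-divergence eps of the baseline.

    p_base      : baseline distribution (no privacy group revealed)
    group_dists : one next-token distribution per privacy-group forward pass
    eps         : privacy parameter; eps=0 returns the baseline exactly
    """
    blends = []
    for p_g in group_dists:
        # Largest alpha in [0, 1] such that q = alpha*p_g + (1-alpha)*p_base
        # satisfies e^-eps <= q[t]/p_base[t] <= e^eps for every token t.
        alpha = 1.0
        up = p_g > p_base  # tokens the group pushes above the baseline
        if up.any():
            alpha = min(alpha, np.min(
                np.expm1(eps) * p_base[up] / (p_g[up] - p_base[up])))
        down = p_g < p_base  # tokens the group pushes below the baseline
        if down.any():
            alpha = min(alpha, np.min(
                (1.0 - np.exp(-eps)) * p_base[down] / (p_base[down] - p_g[down])))
        alpha = max(0.0, min(1.0, alpha))
        blends.append(alpha * p_g + (1.0 - alpha) * p_base)
    # A convex combination of distributions that each lie within eps of the
    # baseline also lies within eps of the baseline.
    return np.mean(blends, axis=0)
```

With `eps=0` the mixing weight collapses to zero and the output is exactly the baseline distribution, matching the abstract's claim that ε = 0 hides PII entirely; larger ε admits more of each group's distribution and hence better paraphrase quality.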