InvisibleInk: High-Utility and Low-Cost Text Generation with Differential Privacy

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of incorporating private reference information into long-form text generation by large language models (LLMs) while simultaneously satisfying differential privacy (DP), preserving generation quality, and maintaining computational efficiency. Methodologically, the authors propose InvisibleInk, a privacy-preserving generation framework that isolates and clips only the sensitive component of the model logits to reduce privacy-budget consumption, and applies the exponential mechanism over a privately constructed superset of the top-k tokens to improve decoding stability and text quality. Compared with existing DP text-generation approaches, the method matches their utility for long-form generation across privacy levels at an 8× reduction in computation cost, and generates private long-form text at less than 10× the computation cost of non-private generation, demonstrating a practical trade-off between strong privacy guarantees (low ε), generation quality, and efficiency.

📝 Abstract
As major progress in LLM-based long-form text generation enables paradigms such as retrieval-augmented generation (RAG) and inference-time scaling, safely incorporating private information into the generation remains a critical open question. We present InvisibleInk, a highly scalable long-form text generation framework satisfying rigorous differential privacy guarantees with respect to the sensitive references. It interprets sampling from the LLM's next-token distribution as the exponential mechanism over the LLM logits with two innovations. First, we reduce the privacy cost by isolating and clipping only the sensitive information in the model logits (relative to the public logits). Second, we improve text quality by sampling from a small superset of the top-$k$ private tokens. Empirical evaluations demonstrate a consistent $8\times$ reduction in computation cost over state-of-the-art baselines to generate long-form private text of the same utility across privacy levels. In summary, InvisibleInk is able to generate private long-form text at less than $10\times$ the computation cost of non-private generation.
Problem

Research questions and friction points this paper is trying to address.

Safely incorporating private information into LLM text generation
Reducing privacy cost while maintaining text quality
Lowering computation cost for differentially private text generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Isolates and clips sensitive information in logits
Samples from superset of top-k private tokens
Reduces computation cost by 8× for private text of equal utility
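The two innovations above can be illustrated with a minimal decoding-step sketch. This is not the paper's implementation: the clipping rule (elementwise here), the candidate set (plain top-k rather than the paper's privately constructed superset), and the function name `private_next_token` are all assumptions for illustration. It only shows the shape of the idea: treat the difference between reference-conditioned ("private") and public logits as the sensitive quantity, clip it, and sample via softmax over a restricted candidate set, which is the exponential mechanism with the clipped logits as the utility score.

```python
import numpy as np

def private_next_token(private_logits, public_logits, clip=1.0, k=50, rng=None):
    """Hypothetical sketch of one InvisibleInk-style decoding step."""
    rng = rng or np.random.default_rng()
    # 1. Isolate the sensitive information: the part of the private logits
    #    that differs from the public (reference-free) logits.
    delta = private_logits - public_logits
    # 2. Clip it elementwise so the privacy cost depends on `clip`,
    #    not on the full logit range (assumed clipping rule).
    delta = np.clip(delta, -clip, clip)
    clipped = public_logits + delta
    # 3. Restrict sampling to the top-k tokens under the clipped logits
    #    (the paper samples from a small, privately built superset).
    cand = np.argsort(clipped)[-k:]
    # 4. Softmax sampling over the candidates == exponential mechanism
    #    with the clipped logits as the utility function.
    z = clipped[cand] - clipped[cand].max()
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(cand, p=p))
```

Because only the clipped difference is sensitive, the per-token privacy cost is bounded by the clipping threshold rather than by the model's unbounded logit values.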