PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
To address privacy leakage of user inputs during online LLM inference, this paper proposes PrivacyRestore, a plug-and-play, model-agnostic privacy-preserving framework: the client identifies and removes privacy spans from its input and aggregates their server-trained restoration vectors into a single meta restoration vector; the server then restores the private information during inference via activation steering. The paper formally defines privacy spans, introduces a restoration-vector aggregation mechanism, and proves that the method prevents linear growth of the privacy budget. The approach requires no model fine-tuning, incurs under 8% inference latency overhead, degrades task performance by under 2%, and ensures the removed private information cannot be recovered from intercepted inputs. Evaluated on three newly constructed benchmarks covering medical and legal scenarios, the method significantly outperforms state-of-the-art baselines.

📝 Abstract
The widespread usage of online Large Language Model (LLM) inference services has raised significant privacy concerns about the potential exposure of private information in user inputs to malicious eavesdroppers. Existing privacy protection methods for LLMs suffer from insufficient privacy protection, performance degradation, or large inference time overhead. To address these limitations, we propose PrivacyRestore, a plug-and-play method to protect the privacy of user inputs during LLM inference. A privacy span is defined as a contiguous sequence of tokens within a text that contains private information. The server first trains a restoration vector for each privacy span and then releases these vectors to clients. The client aggregates the restoration vectors of all privacy spans in the input into a single meta restoration vector, which is sent to the server along with the input stripped of its privacy spans. The private information is then restored via activation steering during inference. Furthermore, we prove that PrivacyRestore inherently prevents the linear growth of the privacy budget. We create three datasets, covering the medical and legal domains, to evaluate the effectiveness of privacy-preserving methods. The experimental results show that PrivacyRestore effectively protects private information while maintaining acceptable levels of performance and inference overhead.
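The client/server flow described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's released implementation: the span list, the restoration vectors, the mean aggregation rule, and the additive steering step are all assumptions made for the sketch.

```python
# Hypothetical sketch of the PrivacyRestore flow: the client strips privacy
# spans and sends only the redacted text plus one aggregated vector; the
# server injects that vector into hidden activations (activation steering).
import numpy as np

HIDDEN_DIM = 8  # toy hidden size

# Server-trained restoration vectors, one per known privacy span
# (illustrative values; the real vectors are learned per span).
restoration_vectors = {
    "chest pain": np.full(HIDDEN_DIM, 0.5),
    "diabetes": np.full(HIDDEN_DIM, -0.3),
}

def client_prepare(text: str):
    """Remove privacy spans and aggregate their restoration vectors."""
    found = [span for span in restoration_vectors if span in text]
    for span in found:
        text = text.replace(span, "")  # private tokens never leave the client
    if found:
        # Mean aggregation is an assumption; the paper defines its own rule.
        meta = np.mean([restoration_vectors[s] for s in found], axis=0)
    else:
        meta = np.zeros(HIDDEN_DIM)
    return text, meta  # only the redacted text and meta vector are sent

def server_steer(hidden_states: np.ndarray, meta: np.ndarray,
                 alpha: float = 1.0) -> np.ndarray:
    """Activation steering: add the meta restoration vector to activations."""
    return hidden_states + alpha * meta

query = "Patient reports chest pain and has diabetes."
redacted, meta_vec = client_prepare(query)
steered = server_steer(np.zeros(HIDDEN_DIM), meta_vec)
```

An eavesdropper who intercepts `redacted` sees neither span, while the server recovers the lost information only inside its activations via `server_steer`.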
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Privacy Protection
Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

PrivacyRestore
Large Language Models
Efficient Privacy Protection