InferDPT: Privacy-Preserving Inference for Black-box Large Language Model

📅 2023-10-18
📈 Citations: 2
Influential: 1
🤖 AI Summary
This work addresses privacy leakage risks in black-box large language model (LLM) inference—e.g., ChatGPT—by proposing the first practical differentially private text generation framework. Methodologically, it introduces an end-to-end "perturb–extract" dual-module architecture and pioneers RANTEXT, a novel mechanism leveraging *random adjacency* to resist embedding revision attacks. It is also the first to integrate the exponential mechanism with knowledge distillation and retrieval-augmented generation (RAG) for low-overhead private inference. Under ε = 6.0, the framework achieves an average privacy protection rate above 90% against embedding revision attacks—0.58× higher than SANTEXT+ and 3.35× higher than CUSTEXT+—while matching the generation quality of non-private GPT-4. The approach thus uniquely balances strong privacy guarantees, practical deployability, and high output fidelity.
📝 Abstract
Large language models (LLMs), like ChatGPT, have greatly simplified text generation tasks. However, they have also raised concerns about privacy risks such as data leakage and unauthorized data collection. Existing solutions for privacy-preserving inference face practical challenges related to computation time and communication costs. In this paper, we propose InferDPT, the first practical framework for the privacy-preserving Inference of black-box LLMs, implementing Differential Privacy in Text generation. InferDPT comprises two key modules: the "perturbation module" utilizes the exponential mechanism to generate a perturbed prompt, facilitating privacy-preserving inference with black-box LLMs, and the "extraction module", inspired by knowledge distillation and retrieval-augmented generation, extracts coherent and consistent text from the perturbed generation result, ensuring successful text generation completion. To address privacy concerns related to previous exponential mechanisms' susceptibility to embedding revision attacks, we introduce RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of InferDPT, which introduces the concept of "RANdom adjacency" for TEXT perturbation within the prompt. Experimental results across three datasets demonstrate that the text generation quality of InferDPT is comparable to that of non-private GPT-4, and RANTEXT surpasses existing state-of-the-art mechanisms, namely SANTEXT+ and CUSTEXT+, in the trade-off between privacy and utility. Even with a privacy parameter ε value of 6.0, RANTEXT achieves an average privacy protection rate exceeding 90% against embedding revision attacks, which is 0.58 times higher than that of SANTEXT+ and 3.35 times higher than that of CUSTEXT+.
Problem

Research questions and friction points this paper is trying to address.

Black-box LLM inference (e.g., via ChatGPT) exposes user prompts to privacy risks such as data leakage and unauthorized data collection.
Existing privacy-preserving inference solutions incur impractical computation time and communication costs.
Prior exponential-mechanism text perturbations are vulnerable to embedding revision attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implements Differential Privacy in Text generation.
Introduces RANTEXT for enhanced privacy protection.
Combines perturbation and extraction modules effectively.
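The perturbation module's core idea—sampling replacement words via the exponential mechanism so that semantically close words are exponentially more likely—can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the two-dimensional embeddings, the vocabulary, and the choice of embedding-space diameter as the utility sensitivity are all assumptions made here for clarity (RANTEXT's random-adjacency construction is not modeled).

```python
import math
import random

# Toy 2-D word embeddings; values are hypothetical, for illustration only.
EMBEDDINGS = {
    "salary":  (0.9, 0.1),
    "income":  (0.85, 0.15),
    "wage":    (0.8, 0.2),
    "weather": (0.1, 0.9),
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def perturb_word(word, epsilon, rng=random):
    """Replace `word` using the exponential mechanism: utility is the
    negative embedding distance, so nearby words get exponentially
    larger sampling weight exp(eps * u / (2 * sensitivity))."""
    origin = EMBEDDINGS[word]
    candidates = list(EMBEDDINGS)
    utilities = [-euclidean(origin, EMBEDDINGS[c]) for c in candidates]
    # Utility sensitivity, taken here as the embedding-space diameter
    # (an assumption of this sketch).
    sensitivity = max(
        euclidean(EMBEDDINGS[a], EMBEDDINGS[b])
        for a in EMBEDDINGS for b in EMBEDDINGS
    )
    weights = [math.exp(epsilon * u / (2 * sensitivity)) for u in utilities]
    return rng.choices(candidates, weights=weights)[0]

def perturb_prompt(prompt, epsilon):
    """Perturb each in-vocabulary word of a prompt independently."""
    return " ".join(
        perturb_word(w, epsilon) if w in EMBEDDINGS else w
        for w in prompt.split()
    )
```

With a large ε the original word dominates the sampling weights (little privacy, high utility); as ε shrinks, distant words become more likely, trading utility for privacy—exactly the privacy–utility trade-off the paper evaluates.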
Meng Tong
CAS Key Laboratory of Electro-Magnetic Space Information, University of Science and Technology of China, Hefei 230026, China, and Anhui Province Key Laboratory of Digital Security
Kejiang Chen
Department of Electronic Engineering and Information Science, University of Science and Technology of China
information hiding, steganography, privacy-preserving
Yuang Qi
University of Science and Technology of China
information hiding, information privacy, AI security
Jie Zhang
Nanyang Technological University
Weiming Zhang
CAS Key Laboratory of Electro-Magnetic Space Information, University of Science and Technology of China, Hefei 230026, China, and Anhui Province Key Laboratory of Digital Security
Nenghai Yu
CAS Key Laboratory of Electro-Magnetic Space Information, University of Science and Technology of China, Hefei 230026, China, and Anhui Province Key Laboratory of Digital Security