🤖 AI Summary
Knowledge-injected large language models (LLMs) face privacy leakage risks from membership inference attacks (MIAs) in both the retrieval-augmented generation (RAG) and supervised fine-tuning (SFT) paradigms. To address this, we propose EPD, the first model-agnostic, integrated privacy defense framework, which strengthens inference-time privacy by fusing the outputs of a knowledge-injected model, the base LLM, and a discriminative judge model. EPD requires no architecture-specific modifications or retraining, and is the first to unify multi-source response modeling across RAG and SFT settings to suppress MIA signals. Experiments show that EPD reduces MIA success rates by 27.8% on average in SFT and by 526.3% relative to the baseline in RAG, while preserving generation quality. Its core contribution is a lightweight, general-purpose, plug-and-play multi-model ensemble defense paradigm.
📝 Abstract
Retrieval-Augmented Generation (RAG) and Supervised Fine-Tuning (SFT) have become the predominant paradigms for equipping Large Language Models (LLMs) with external knowledge for diverse, knowledge-intensive tasks. However, while such knowledge injection improves performance, it also exposes new attack surfaces. Membership Inference Attacks (MIAs), which aim to determine whether a given data sample was included in a model's training set, pose serious threats to privacy and trust in sensitive domains. To this end, we first systematically evaluate the vulnerability of RAG- and SFT-based LLMs to various MIAs. Then, to address the privacy risk, we introduce a novel, model-agnostic defense framework, Ensemble Privacy Defense (EPD), which aggregates and evaluates the outputs of a knowledge-injected LLM, a base LLM, and a dedicated judge model to enhance resistance against MIAs. Comprehensive experiments show that, on average, EPD reduces MIA success by up to 27.8% for SFT and 526.3% for RAG compared to the inference-time baseline, while maintaining answer quality.
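The abstract describes EPD only at a high level: a judge model arbitrates between a knowledge-injected model's answer and a base model's answer to suppress membership signals. A minimal sketch of that ensemble idea is shown below; the function names, the scalar risk score, and the threshold-based fallback are all illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of an EPD-style ensemble defense.
# All names (epd_respond, judge, threshold) are illustrative assumptions;
# the paper's real components and scoring rule may differ.

def epd_respond(query, knowledge_llm, base_llm, judge, threshold=0.5):
    """Return a privacy-hardened answer by combining two model outputs."""
    injected = knowledge_llm(query)   # answer from the knowledge-injected (RAG/SFT) model
    generic = base_llm(query)         # answer from the base LLM, no private knowledge
    # The judge estimates how strongly the injected answer could leak a
    # membership signal; above the threshold, fall back to the generic answer.
    risk = judge(query, injected, generic)
    return generic if risk > threshold else injected

# Toy usage with stub callables standing in for real LLM calls.
ki = lambda q: "Patient X was treated on 2021-03-04."     # memorized private detail
base = lambda q: "I don't have that specific record."
judge = lambda q, a, b: 0.9 if "2021-03-04" in a else 0.1  # crude leakage heuristic

print(epd_respond("When was Patient X treated?", ki, base, judge))
# → I don't have that specific record.
```

The design choice sketched here, falling back to the base model when the judge flags leakage, is one simple way to "aggregate and evaluate" multiple outputs; the actual framework may instead blend or rerank responses.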