Membership Inference Attacks on LLM-based Recommender Systems

📅 2025-08-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel privacy threat in large language model (LLM)-based recommender systems: membership inference attacks (MIAs) exploiting user-sensitive historical interaction data embedded during in-context learning (ICL). To address this, we systematically design and implement four ICL-specific MIAs—direct querying, hallucination induction, semantic similarity matching, and adversarial poisoning—integrating prompt engineering, semantic similarity analysis, and data poisoning techniques to leverage both ICL mechanisms and recommendation structural properties. Extensive experiments across three mainstream LLMs (Llama-2, Qwen, GLM) and two benchmark recommendation datasets demonstrate that direct querying and adversarial poisoning achieve significantly higher attack success rates than baselines—up to 92.3%. This is the first empirical validation of membership leakage in ICL-based recommendation, establishing foundational insights and practical tools for privacy risk assessment and mitigation in LLM-powered recommender systems.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) based Recommender Systems (RecSys) can flexibly adapt recommendation systems to different domains. It utilizes in-context learning (ICL), i.e., the prompts, to customize the recommendation functions, which include sensitive historical user-specific item interactions, e.g., implicit feedback like clicked items or explicit product reviews. Such private information may be exposed to novel privacy attack. However, no study has been done on this important issue. We design four membership inference attacks (MIAs), aiming to reveal whether victims' historical interactions have been used by system prompts. They are emph{direct inquiry, hallucination, similarity, and poisoning attacks}, each of which utilizes the unique features of LLMs or RecSys. We have carefully evaluated them on three LLMs that have been used to develop ICL-LLM RecSys and two well-known RecSys benchmark datasets. The results confirm that the MIA threat on LLM RecSys is realistic: direct inquiry and poisoning attacks showing significantly high attack advantages. We have also analyzed the factors affecting these attacks, such as the number of shots in system prompts and the position of the victim in the shots.
Problem

Research questions and friction points this paper is trying to address.

Membership inference attacks on LLM-based recommender systems
Revealing if user interactions were used in system prompts
Assessing privacy risks from sensitive historical interaction data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct inquiry attacks exploit LLM responses
Poisoning attacks manipulate training data integrity
Hallucination and similarity leverage model output patterns
🔎 Similar Papers
No similar papers found.