Understanding Users' Privacy Perceptions Towards LLM's RAG-based Memory

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses three critical user concerns regarding retrieval-augmented generation (RAG)-based memory in large language models (LLMs): ambiguous mental models of LLM “memory,” lack of user control over memory operations, and heightened privacy risks. Through 18 semi-structured interviews and thematic analysis, we identify pervasive misalignments between users' mental models and actual system behavior, alongside skepticism about memory accuracy, persistent anxiety over data misuse, and strong demand for fine-grained, auditable memory controls (inspectability, editability, and deletability). Building on these findings, we propose the *Transparency-by-Design Memory Interface* principle: explicitly externalizing memory provenance, temporal validity, update mechanisms, and access permissions. This principle establishes a theoretical foundation and an actionable design framework for developing trustworthy, controllable, and interpretable LLM memory systems.

📝 Abstract
Large Language Models (LLMs) are increasingly integrating memory functionalities to provide personalized and context-aware interactions. However, user understanding, practices and expectations regarding these memory systems are not yet well understood. This paper presents a thematic analysis of semi-structured interviews with 18 users to explore their mental models of LLM's Retrieval Augmented Generation (RAG)-based memory, current usage practices, perceived benefits and drawbacks, privacy concerns and expectations for future memory systems. Our findings reveal diverse and often incomplete mental models of how memory operates. While users appreciate the potential for enhanced personalization and efficiency, significant concerns exist regarding privacy, control and the accuracy of remembered information. Users express a desire for granular control over memory generation, management, usage and updating, including clear mechanisms for reviewing, editing, deleting and categorizing memories, as well as transparent insight into how memories and inferred information are used. We discuss design implications for creating more user-centric, transparent, and trustworthy LLM memory systems.
Problem

Research questions and friction points this paper is trying to address.

Understanding user perceptions of privacy in LLM memory systems
Exploring user expectations for control over memory management
Investigating privacy concerns in RAG-based memory functionalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Thematic analysis of 18 semi-structured user interviews
Exploration of users' mental models of RAG-based memory
Design implications for user-centric, transparent memory control