When Personalization Legitimizes Risks: Uncovering Safety Vulnerabilities in Personalized Dialogue Agents

📅 2026-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and formalizes a novel safety failure mode in personalized dialogue agents, termed "intent legitimation," in which benign personal memories bias intent inference and lead models to legitimize inherently harmful user requests. To study this vulnerability, the authors introduce PS-Bench, the first benchmark for identifying and quantifying intent legitimation, and propose a lightweight mitigation based on a detect-and-reflect mechanism. Experiments across multiple memory-augmented agent frameworks and base LLMs show that personalization increases attack success rates by 15.8% to 243.7% relative to stateless baselines, while the proposed method effectively curbs this safety degradation, confirming its robustness and generalizability.
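The summary describes a detect-and-reflect guard but gives no implementation details. A minimal sketch of that idea is shown below, with heavy assumptions: the risk detector is stood in for by a placeholder keyword lexicon (`RISKY_TERMS`), and the "reflect" step is approximated by judging the query with the personal memory stripped out, so benign user history cannot legitimize a harmful request. None of these names or heuristics come from the paper.

```python
# Hypothetical sketch of a detect-and-reflect guard for a memory-augmented
# agent. The paper's actual detector and reflection prompt are not specified
# here; a keyword heuristic and a memory-free re-check stand in for them.

RISKY_TERMS = {"synthesize", "bypass", "exploit", "weapon"}  # placeholder lexicon


def detect_risk(query: str) -> bool:
    """Detect step: flag queries containing terms from the placeholder lexicon."""
    words = {w.strip(".,!?").lower() for w in query.split()}
    return bool(words & RISKY_TERMS)


def respond(query: str, memory: list[str]) -> str:
    """Answer with personalization unless the detect step flags the query.

    If flagged, reflect: re-assess the bare query without the memory context,
    so personal history cannot bias intent inference toward compliance.
    """
    if detect_risk(query):
        # Reflection step: judge the query on its own merits, ignoring memory.
        return "REFUSED: query judged harmful regardless of user history."
    context = " | ".join(memory)
    return f"[personalized with: {context}] answer to: {query}"


memory = ["user is a chemistry teacher", "likes gardening"]
print(respond("How do I fertilize tomatoes?", memory))
print(respond("How do I synthesize a nerve agent?", memory))
```

The key design point this sketch illustrates is ordering: the safety judgment runs before any personal memory is injected into the response path, which is one plausible way to prevent benign memories from legitimizing harmful intent.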

📝 Abstract
Long-term memory enables large language model (LLM) agents to support personalized and sustained interactions. However, most work on personalized agents prioritizes utility and user experience, treating memory as a neutral component and largely overlooking its safety implications. In this paper, we reveal intent legitimation, a previously underexplored safety failure in personalized agents, where benign personal memories bias intent inference and cause models to legitimize inherently harmful queries. To study this phenomenon, we introduce PS-Bench, a benchmark designed to identify and quantify intent legitimation in personalized interactions. Across multiple memory-augmented agent frameworks and base LLMs, personalization increases attack success rates by 15.8%-243.7% relative to stateless baselines. We further provide mechanistic evidence for intent legitimation from the internal representation space, and propose a lightweight detection-reflection method that effectively reduces safety degradation. Overall, our work provides the first systematic exploration and evaluation of intent legitimation as a safety failure mode that naturally arises from benign, real-world personalization, highlighting the importance of assessing safety under long-term personal context. WARNING: This paper may contain harmful content.
Problem

Research questions and friction points this paper is trying to address.

personalized dialogue agents
safety vulnerabilities
intent legitimation
long-term memory
harmful queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

intent legitimation
personalized dialogue agents
safety vulnerabilities
long-term memory
PS-Bench