Towards Aligning Personalized Conversational Recommendation Agents with Users' Privacy Preferences

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing privacy management models rely on users’ unilateral control over passive tools, rendering them ill-suited to the dynamic, interactive nature of AI agents in personalized conversational recommendation. Method: This paper proposes a novel paradigm—“proactive privacy preference alignment”—which transcends traditional reactive mechanisms. It is the first to integrate Contextual Integrity and Privacy Calculus theories, establishing a formalizable alignment framework that learns from both implicit and explicit user feedback. A Pareto-optimization mechanism is introduced to jointly balance privacy protection and recommendation utility. Contribution/Results: The work systematically identifies and addresses five critical challenges, delivering instantiated technical solutions and representative application scenarios. It provides both theoretical foundations and methodological support for privacy autonomy in AI agents, advancing the design of adaptive, user-centered privacy-aware conversational systems.

📝 Abstract
The proliferation of AI agents, with their complex and context-dependent actions, renders conventional privacy paradigms obsolete. This position paper argues that the current model of privacy management, rooted in a user's unilateral control over a passive tool, is inherently mismatched with the dynamic and interactive nature of AI agents. We contend that effective privacy protection requires agents to proactively align with users' privacy preferences rather than passively wait for users to exercise control. To ground this shift, using personalized conversational recommendation agents as a case study, we propose a conceptual framework built on Contextual Integrity (CI) theory and Privacy Calculus theory. This synthesis reframes automated privacy management as an alignment problem: agents initially do not know users' preferences and must learn them from implicit or explicit feedback. Upon receiving preference feedback, agents apply alignment techniques and Pareto optimization to match those preferences while balancing privacy and utility. We introduce formulations and instantiations, potential applications, and five open challenges.
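As a rough illustration of the alignment framing in the abstract, the sketch below models a user's approval of a candidate information flow with a simple logistic preference model updated from explicit accept/reject feedback. The feature names, the logistic form, and the update rule are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (assumptions, not the paper's method): learning a user's
# privacy preference over disclosure contexts from accept/reject feedback.
import numpy as np

# Hypothetical Contextual-Integrity-style features of a candidate disclosure:
# sensitivity of the attribute, trust in the recipient, relevance to the recommendation.
FEATURES = ["sensitivity", "recipient_trust", "relevance"]

class PrivacyPreferenceModel:
    """Logistic model of P(user approves disclosure | context features)."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)  # preference weights, unknown at first
        self.b = 0.0
        self.lr = lr

    def prob_approve(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x: np.ndarray, approved: bool) -> None:
        """One gradient step on the log-likelihood of the observed feedback."""
        p = self.prob_approve(x)
        err = (1.0 if approved else 0.0) - p
        self.w += self.lr * err * x
        self.b += self.lr * err

model = PrivacyPreferenceModel(len(FEATURES))

# Simulated feedback: (context features, did the user approve the disclosure?)
feedback = [
    (np.array([0.9, 0.2, 0.3]), False),  # sensitive, low-trust recipient -> reject
    (np.array([0.2, 0.8, 0.9]), True),   # low sensitivity, highly relevant -> accept
    (np.array([0.7, 0.9, 0.8]), True),
]
for x, approved in feedback:
    model.update(x, approved)

print("learned weights:", dict(zip(FEATURES, model.w.round(3))))
print("P(approve new disclosure):", round(model.prob_approve(np.array([0.5, 0.6, 0.7])), 3))
```

In the paper's framing, such a learned preference model would be refined from both implicit signals (e.g., conversational behavior) and explicit corrections, though the exact learning procedure is not reproduced here.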
Problem

Research questions and friction points this paper is trying to address.

Align AI agents with user privacy preferences dynamically
Shift from passive user control to proactive agent alignment
Balance privacy and utility within a Contextual Integrity and Privacy Calculus framing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proactive alignment with user privacy preferences
Framework combining Contextual Integrity and Privacy Calculus
Pareto optimization for the privacy-utility balance (see the sketch below)
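To illustrate the privacy-utility trade-off listed above, here is a minimal sketch that scores hypothetical disclosure policies on recommendation utility and privacy preservation and keeps only the Pareto-optimal ones. The policy names and scores are invented for illustration; this is not the paper's actual optimization mechanism.

```python
# Minimal sketch (illustrative assumptions): Pareto filtering of disclosure
# policies scored on recommendation utility and privacy preservation.
from typing import List, NamedTuple

class Policy(NamedTuple):
    name: str
    utility: float  # expected recommendation quality (higher is better)
    privacy: float  # degree of privacy preserved (higher is better)

def pareto_front(policies: List[Policy]) -> List[Policy]:
    """Keep policies that no other policy dominates on both objectives."""
    front = []
    for p in policies:
        dominated = any(
            q.utility >= p.utility and q.privacy >= p.privacy
            and (q.utility > p.utility or q.privacy > p.privacy)
            for q in policies
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical disclosure policies for a conversational recommender.
candidates = [
    Policy("share nothing",                 utility=0.40, privacy=1.00),
    Policy("share coarse interests",        utility=0.70, privacy=0.80),
    Policy("share full history",            utility=0.90, privacy=0.30),
    Policy("share history, low-trust peer", utility=0.60, privacy=0.20),  # dominated
]

for p in pareto_front(candidates):
    print(f"{p.name}: utility={p.utility}, privacy={p.privacy}")
```

In practice, the agent would then pick a point on this front according to the user's learned preference weights, for instance by scalarizing the two objectives; how the paper instantiates that choice is described in its formulations rather than here.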
Shuning Zhang
Tsinghua University
HCI · Usable Privacy and Security · AI
Ying Ma
School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
Jingruo Chen
Cornell University
Human-AI Interaction
Simin Li
Beihang University, Beijing, China
Xin Yi
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China