🤖 AI Summary
Existing privacy management models rely on users’ unilateral control over passive tools, rendering them ill-suited to the dynamic, interactive nature of AI agents in personalized conversational recommendation.
Method: This paper proposes a novel paradigm—“proactive privacy preference alignment”—which transcends traditional reactive mechanisms. It is the first to integrate Contextual Integrity and Privacy Calculus theories, establishing a formalizable alignment framework that learns from both implicit and explicit user feedback. A Pareto-optimization mechanism is introduced to jointly balance privacy protection and recommendation utility.
Contribution/Results: The work systematically identifies and addresses five critical challenges, delivering instantiated technical solutions and representative application scenarios. It provides both theoretical foundations and methodological support for privacy autonomy in AI agents, advancing the design of adaptive, user-centered privacy-aware conversational systems.
📝 Abstract
The proliferation of AI agents, with their complex and context-dependent actions, renders conventional privacy paradigms obsolete. This position paper argues that the current model of privacy management, rooted in a user's unilateral control over a passive tool, is inherently mismatched with the dynamic and interactive nature of AI agents. We contend that effective privacy protection requires agents to proactively align with users' privacy preferences rather than passively wait for users to exert control. To ground this shift, using personalized conversational recommendation agents as a case, we propose a conceptual framework built on Contextual Integrity (CI) theory and Privacy Calculus theory. This synthesis reframes automated privacy management as an alignment problem: AI agents initially do not know users' privacy preferences and learn them through implicit or explicit feedback. Upon receiving this feedback, the agents apply alignment and Pareto optimization to align with those preferences while balancing privacy and utility. We introduce formulations and instantiations, potential applications, and five open challenges.
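The privacy–utility balancing step described above can be sketched as a Pareto filter over candidate agent actions. The scoring scheme, candidate set, and function below are illustrative assumptions for exposition, not the paper's implementation:

```python
# Hypothetical sketch: keep only agent actions that are Pareto-optimal
# with respect to two objectives, privacy protection and recommendation
# utility (both higher-is-better). The candidate scores are made up.

def pareto_front(candidates):
    """Return the candidates not dominated on (privacy, utility).

    One candidate dominates another if it is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for i, (p_i, u_i) in enumerate(candidates):
        dominated = any(
            p_j >= p_i and u_j >= u_i and (p_j > p_i or u_j > u_i)
            for j, (p_j, u_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((p_i, u_i))
    return front

# Each tuple: (privacy score, recommendation utility) for one candidate
# disclosure/recommendation action of the agent.
actions = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9), (0.5, 0.5), (0.2, 0.1)]
print(pareto_front(actions))  # → [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9)]
```

In this framing, a learned preference model would then pick a point on the front (e.g. weighting privacy more heavily for a privacy-sensitive user) rather than maximizing either objective alone.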