🤖 AI Summary
Current conversational recommendation systems (CRSs) rely on centralized data collection and training, posing significant risks of user preference privacy leakage. To address this, we propose FedCRS, the first decentralized CRS framework integrating federated learning with user-level differential privacy (DP). FedCRS models historical user interests and multi-turn real-time preference interactions entirely on-device, eliminating the need to upload raw sensitive data. We give the first formal definition of privacy-preserving criteria for CRSs and introduce an interpretable dialogue state tracking mechanism coupled with multi-turn preference modeling, both operating under strict user-level DP guarantees. Extensive experiments demonstrate that FedCRS achieves recommendation performance competitive with state-of-the-art non-private baselines while substantially reducing the risk of information leakage. The framework thus jointly provides strong privacy protection, high recommendation accuracy, and model interpretability.
📝 Abstract
Conversational Recommender Systems (CRSs) have become increasingly popular as a powerful tool for providing personalized recommendation experiences. By directly engaging with users in a conversational manner to learn their current and fine-grained preferences, a CRS can quickly derive recommendations that are relevant and justifiable. However, existing CRSs typically rely on a centralized training and deployment process, which involves collecting and storing explicitly communicated user preferences in a centralized repository. These fine-grained user preferences are completely human-interpretable and, if leaked or breached, can easily be used to infer sensitive information about the user (e.g., financial status, political stances, and health conditions). To address these privacy concerns, we first define a set of privacy protection guidelines for preserving user privacy under the conversational recommendation setting. Based on these guidelines, we propose a novel federated conversational recommendation framework that effectively reduces the risk of exposing user privacy by (i) decentralizing both the historical interest estimation stage and the interactive preference elicitation stage, and (ii) strictly bounding privacy leakage by enforcing user-level differential privacy with carefully selected privacy budgets. Through extensive experiments, we show that the proposed framework not only satisfies these user privacy protection guidelines, but also achieves competitive recommendation performance even when compared to the state-of-the-art non-private conversational recommendation approach.
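User-level DP in a federated round is commonly enforced by clipping each user's model update to bound any single user's influence, then adding Gaussian noise calibrated to that clip norm before aggregation. The sketch below illustrates this standard recipe; the function name, parameters, and NumPy-based formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_federated_round(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-user model updates with user-level DP (illustrative sketch).

    1. Clip each user's update vector to L2 norm <= clip_norm, so no single
       user can shift the aggregate by more than clip_norm (the sensitivity).
    2. Add Gaussian noise proportional to clip_norm to the clipped sum
       (the Gaussian mechanism), then average over participating users.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for update in user_updates:
        norm = np.linalg.norm(update)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(update * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)
```

The noise scale (via `noise_multiplier`) and the number of rounds together determine the spent privacy budget; in practice this is tracked with a privacy accountant rather than set by hand.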