Preserving Privacy and Utility in LLM-Based Product Recommendations

📅 2025-05-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the inherent trade-off between user privacy and recommendation utility in LLM-driven recommender systems, this paper proposes a hybrid privacy-preserving recommendation framework. It introduces a sensitive/non-sensitive data separation mechanism: only non-sensitive features are uploaded to the cloud, where an LLM generates coarse-grained recommendations, while sensitive user preferences remain entirely on-device and are used by a lightweight de-obfuscation module to reconstruct fine-grained recommendations locally. The authors present this as the first approach to keep sensitive data in a fully closed loop on the device while still harnessing cloud-based LLM capabilities. Experiments on a real-world e-commerce dataset show significant improvements in HR@10 and category distribution alignment over obfuscation-only baselines; recommendation quality matches that of a full-data-upload baseline, and the system deploys efficiently on consumer-grade hardware.
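The data flow the summary describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the category labels, catalog shape, and the rule-based merge are all assumptions (the paper uses a learned de-obfuscation module), but the trust boundary is the same — only non-sensitive interactions ever reach the cloud.

```python
# Hypothetical sketch of the hybrid pipeline: non-sensitive history goes
# to the cloud LLM for coarse candidates; sensitive preferences stay
# on-device and drive a local reconstruction step.

SENSITIVE_CATEGORIES = {"health", "finance"}  # assumed sensitive labels

def split_interactions(interactions):
    """Separate a user's history into sensitive and non-sensitive items."""
    sensitive = [i for i in interactions if i["category"] in SENSITIVE_CATEGORIES]
    nonsensitive = [i for i in interactions if i["category"] not in SENSITIVE_CATEGORIES]
    return sensitive, nonsensitive

def cloud_llm_recommend(nonsensitive, catalog, k=20):
    """Stand-in for the cloud LLM call: coarse candidates derived only
    from the non-sensitive part of the history."""
    seen = {i["category"] for i in nonsensitive}
    return [item for item in catalog if item["category"] in seen][:k]

def local_reconstruct(coarse, sensitive, local_catalog, k=10):
    """On-device 'de-obfuscation': restore recommendations tied to the
    sensitive preferences by merging locally selected items with the
    cloud's coarse candidates. A rule-based merge stands in here for
    the paper's learned module."""
    prefs = {i["category"] for i in sensitive}
    local_hits = [item for item in local_catalog if item["category"] in prefs]
    merged, seen_ids = [], set()
    for item in local_hits + coarse:
        if item["id"] not in seen_ids:
            merged.append(item)
            seen_ids.add(item["id"])
    return merged[:k]
```

Note that `sensitive` and `local_catalog` are consumed only by `local_reconstruct`, which runs on-device; `cloud_llm_recommend` never sees them.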

πŸ“ Abstract
Large Language Model (LLM)-based recommendation systems leverage powerful language models to generate personalized suggestions by processing user interactions and preferences. Unlike traditional recommendation systems that rely on structured data and collaborative filtering, LLM-based models process textual and contextual information, often using cloud-based infrastructure. This raises privacy concerns, as user data is transmitted to remote servers, increasing the risk of exposure and reducing control over personal information. To address this, we propose a hybrid privacy-preserving recommendation framework that separates sensitive from non-sensitive data and shares only the latter with the cloud to harness LLM-powered recommendations. To restore recommendations lost to the obfuscation of sensitive data, we design a de-obfuscation module that reconstructs sensitive recommendations locally. Experiments on real-world e-commerce datasets show that our framework achieves almost the same recommendation utility as a system that shares all data with an LLM, while preserving privacy to a large extent. Compared to obfuscation-only techniques, our approach improves HR@10 scores and category distribution alignment, offering a better balance between privacy and recommendation quality. Furthermore, our method runs efficiently on consumer-grade hardware, making privacy-aware LLM-based recommendation systems practical for real-world use.
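The abstract's two evaluation metrics can be made concrete. HR@10 is the standard hit-rate metric; the paper's exact category-alignment measure is not specified in this summary, so a total-variation distance between category frequency distributions serves below as an illustrative proxy, not the authors' definition.

```python
from collections import Counter

def hit_rate_at_k(ranked_lists, held_out, k=10):
    """HR@k: fraction of users whose held-out test item appears in
    their top-k recommendation list."""
    hits = sum(1 for recs, target in zip(ranked_lists, held_out)
               if target in recs[:k])
    return hits / len(held_out)

def category_tv_distance(recommended_cats, history_cats):
    """Total-variation distance between the category distributions of
    the recommendations and the user's history: 0 means perfectly
    aligned, 1 means disjoint. An assumed proxy for 'category
    distribution alignment'."""
    p, q = Counter(recommended_cats), Counter(history_cats)
    n_p, n_q = sum(p.values()), sum(q.values())
    return 0.5 * sum(abs(p[c] / n_p - q[c] / n_q) for c in set(p) | set(q))
```

Under these definitions, a privacy-preserving system is "close to the full-upload baseline" when its HR@10 is comparable and its category distance to the user's true history is small.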
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy and utility in LLM-based recommendations
Reducing sensitive data exposure in cloud-based LLM systems
Improving recommendation quality while preserving user privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework separates sensitive and non-sensitive data
De-obfuscation module reconstructs sensitive recommendations locally
Runs efficiently on consumer-grade hardware while balancing privacy and utility