PeaPOD: Personalized Prompt Distillation for Generative Recommendation

📅 2024-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based generative recommender systems rely on discrete user/item ID embeddings, causing a semantic gap between IDs and natural language and hindering modeling of inter-user relationships and fine-grained preferences. To address this, we propose a personalized soft prompt distillation framework: it constructs a learnable, shared soft prompt pool and employs a dynamic gating mechanism to weight and compose prompts according to user-specific interests, enabling end-to-end mapping from IDs to semantically grounded prompts. Our approach is the first to unify soft prompt learning, prompt distillation, and dynamic weighted gating within a sequence-to-sequence generation architecture, jointly optimizing recommendation accuracy and explanation relevance. Extensive experiments on three real-world datasets demonstrate state-of-the-art performance across sequential recommendation, Top-N recommendation, and explanation generation tasks.
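The weighted-composition step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each shared prompt is scored by dot-product similarity with a user embedding and that the gate is a softmax-weighted sum over the pool (the paper's exact gating function and distillation objective are not detailed here; all names and dimensions are illustrative).

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def compose_personalized_prompt(user_emb, prompt_pool):
    """Compose a user-personalized soft prompt from a shared pool.

    user_emb:    list[float], the user's embedding (illustrative stand-in
                 for whatever user representation the model learns).
    prompt_pool: list[list[float]], shared learnable prompt vectors.

    Each pooled prompt is scored against the user embedding, the scores
    are normalized with softmax, and the personalized prompt is the
    weighted sum of the pool (a compositional mixture of shared prompts).
    """
    scores = [sum(u * p for u, p in zip(user_emb, prompt))
              for prompt in prompt_pool]
    weights = softmax(scores)
    dim = len(prompt_pool[0])
    return [sum(w * prompt[j] for w, prompt in zip(weights, prompt_pool))
            for j in range(dim)]
```

For example, a user whose embedding aligns with the first pooled prompt receives a personalized prompt dominated by that prompt but still blended with the rest of the pool, which is what lets similar users share prompt components.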

📝 Abstract
Recently, researchers have investigated the capabilities of Large Language Models (LLMs) for generative recommender systems. Existing LLM-based recommender models are trained by adding user and item IDs to a discrete prompt template. However, the disconnect between IDs and natural language makes it difficult for the LLM to learn the relationship between users. To address this issue, we propose a PErsonAlized PrOmpt Distillation (PeaPOD) approach, to distill user preferences as personalized soft prompts. Considering the complexities of user preferences in the real world, we maintain a shared set of learnable prompts that are dynamically weighted based on the user's interests to construct the user-personalized prompt in a compositional manner. Experimental results on three real-world datasets demonstrate the effectiveness of our PeaPOD model on sequential recommendation, top-n recommendation, and explanation generation tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM-based generative recommendation systems
Improve user-item relationship learning via personalized prompts
Address disconnect between IDs and natural language understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized soft prompt distillation
Dynamic weighting of learnable prompts
Compositional user-personalized prompt construction