🤖 AI Summary
This paper addresses the limitations of traditional discriminative recommendation systems, particularly their reliance on manual feature engineering, by investigating how large language models (LLMs) can empower generative recommendation (GR). It proposes an LLM-centric GR framework that integrates sequential modeling, contextual reasoning, and content generation, enhanced by prompt engineering, domain knowledge injection, and scenario-specific adaptation. The authors systematically analyze key industrial deployment challenges, including efficiency, interpretability, and data bias, and survey prevailing practices and their shortcomings. The primary contributions are threefold: (1) a comprehensive methodology for LLM-based GR that explicitly delineates its fundamental distinctions from conventional recommendation paradigms; (2) a clear articulation of the evolutionary trajectory toward generalization and intelligence in GR; and (3) a unified technical roadmap and standardized evaluation benchmark to guide both theoretical research and real-world implementation.
📝 Abstract
In the past year, Generative Recommendation (GR) has advanced substantially, especially in leveraging the powerful sequence modeling and reasoning capabilities of Large Language Models (LLMs) to improve overall recommendation performance. LLM-based GRs are forming a new paradigm, distinct from discriminative recommendation, with strong potential to replace traditional recommendation systems that depend heavily on complex hand-crafted features. In this paper, we provide a comprehensive survey aimed at facilitating further research on LLM-based GRs. First, we outline the general preliminaries and application cases of LLM-based GRs. Next, we introduce the main considerations that arise when LLM-based GRs are deployed in real industrial scenarios. Finally, we explore promising directions for LLM-based GRs. We hope this survey contributes to the ongoing advancement of the GR domain.