Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In the era of information overload, recommender systems are undergoing a paradigm shift. This tutorial focuses on generative large recommendation models (GLRMs), a direction that remains underexplored compared with the extensively covered LLM-enhanced recommendation approaches. It surveys recent advances in LLM-inspired architecture design, behavioral sequence generation, and efficient training and inference, organized around four core challenges: data quality, scaling laws, user behavior mining, and inference efficiency. The tutorial (1) clarifies how GLRMs differ from LLM-enhanced recommendation, (2) reviews recent advancements, open challenges, and potential research directions, and (3) charts the evolution of recommender systems from the traditional retrieval-plus-ranking pipeline toward an end-to-end generative paradigm.

📝 Abstract
In the era of information overload, recommendation systems play a pivotal role in filtering data and delivering personalized content. Recent advancements in feature interaction and user behavior modeling have significantly enhanced the recall and ranking processes of these systems. With the rise of large language models (LLMs), new opportunities have emerged to further improve recommendation systems. This tutorial explores two primary approaches for integrating LLMs: LLMs-enhanced recommendations, which leverage the reasoning capabilities of general LLMs, and generative large recommendation models, which focus on scaling and sophistication. While the former has been extensively covered in existing literature, the latter remains underexplored. This tutorial aims to fill this gap by providing a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions. Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference. By engaging with this tutorial, participants will gain insights into the latest developments and future opportunities in the field, aiding both academic research and practical applications. The timely nature of this exploration supports the rapid evolution of recommendation systems, offering valuable guidance for researchers and practitioners alike.
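To make the generative paradigm described above concrete: rather than scoring a fixed candidate set, a generative recommendation model treats a user's interaction history as a token sequence and generates the next item ID. The toy sketch below uses a bigram count model as a stand-in for the large sequence models the tutorial discusses; all data and function names are illustrative assumptions, not part of the tutorial itself.

```python
from collections import Counter, defaultdict

def fit_bigram(histories):
    """Count item-to-item transitions across user interaction histories."""
    transitions = defaultdict(Counter)
    for seq in histories:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

def generate_next(transitions, history, k=1):
    """Decode the top-k most likely next items given the last interaction."""
    last = history[-1]
    return [item for item, _ in transitions[last].most_common(k)]

# Made-up interaction logs: each list is one user's item sequence.
histories = [["A", "B", "C"], ["A", "B", "D"], ["B", "C"], ["A", "B", "C"]]
model = fit_bigram(histories)
print(generate_next(model, ["X", "A"]))     # → ['B']
print(generate_next(model, ["A", "B"], 2))  # → ['C', 'D']
```

A large generative recommender replaces the bigram table with a decoder conditioned on the full history (and prompts or side information), but the interface is the same: history in, generated item IDs out.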
Problem

Research questions and friction points this paper is trying to address.

Explores generative large recommendation models.
Addresses integration of LLMs in recommendations.
Investigates data quality and scaling challenges.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for recommendation.
Focuses on scaling recommendation models.
Explores user behavior mining techniques.
👥 Authors
Hao Wang — University of Science and Technology of China, Hefei, Anhui, China
Wei Guo — Huawei Noah’s Ark Lab, Singapore
Luankang Zhang — University of Science and Technology of China (RS, LLMs4Rec)
Jin Yao Chin — Nanyang Technological University (Recommendation Systems, Deep Learning)
Yufei Ye — Stanford University (Computer Vision)
Huifeng Guo — Huawei, Harbin Institute of Technology (Recommender Systems, Deep Learning, Data Mining)
Yong Liu — Huawei Noah’s Ark Lab, Shenzhen, China & Singapore
Defu Lian — University of Science and Technology of China, Hefei, Anhui, China
Ruiming Tang — Huawei Noah’s Ark Lab, Shenzhen, China
Enhong Chen — University of Science and Technology of China (Data Mining, Recommender Systems, Machine Learning)