Large Language Models Are Universal Recommendation Learners

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalizability and high architectural design cost of task-specific models in real-world recommender systems, this paper investigates the feasibility of large language models (LLMs) as universal recommendation learners. It proposes three key techniques: (1) a multimodal fusion module that enriches item semantic representations; (2) a sequence-in-set-out modeling paradigm that makes candidate generation more efficient; and (3) industrially oriented prompt engineering strategies that strengthen the LLM's instruction following and ranking for recommendation tasks. Extensive experiments on large-scale industrial datasets show that the approach performs on par with dedicated expert models across multiple recommendation tasks, including click-through rate prediction, multi-objective ranking, and cold-start recommendation, supporting the practical viability of LLMs as unified, scalable recommendation learners in production settings.

📝 Abstract
In real-world recommender systems, different tasks are typically addressed using supervised learning on task-specific datasets with carefully designed model architectures. We demonstrate that large language models (LLMs) can function as universal recommendation learners, capable of handling multiple tasks within a unified input-output framework, eliminating the need for specialized model designs. To improve the recommendation performance of LLMs, we introduce a multimodal fusion module for item representation and a sequence-in-set-out approach for efficient candidate generation. When applied to industrial-scale data, our LLM achieves competitive results with expert models elaborately designed for different recommendation tasks. Furthermore, our analysis reveals that recommendation outcomes are highly sensitive to text input, highlighting the potential of prompt engineering in optimizing industrial-scale recommender systems.
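The abstract names a multimodal fusion module for item representation but gives no architectural details here. A minimal sketch of one plausible reading, in which each modality is projected into a shared space and combined, is shown below; the function name, dimensions, and the additive-fusion choice are all illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def fuse_item_representation(text_emb, image_emb, w_text, w_image):
    """Project each modality into a shared d_model space and sum them
    (gated or attention-based fusion are common alternatives)."""
    projected_text = text_emb @ w_text      # (d_text,)  -> (d_model,)
    projected_image = image_emb @ w_image   # (d_image,) -> (d_model,)
    fused = projected_text + projected_image
    return fused / np.linalg.norm(fused)    # unit-normalize the item vector

rng = np.random.default_rng(0)
d_text, d_image, d_model = 8, 6, 4
item = fuse_item_representation(
    rng.normal(size=d_text), rng.normal(size=d_image),
    rng.normal(size=(d_text, d_model)), rng.normal(size=(d_image, d_model)))
```

The normalized output can then be consumed by the LLM as a dense item token, one common way such fused embeddings are fed into language-model backbones.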
Problem

Research questions and friction points this paper is trying to address.

Can LLMs serve as universal recommendation learners, replacing task-specific model designs?
How can multimodal fusion enrich item representations for LLM-based recommendation?
Can prompt engineering be used to optimize industrial-scale recommender systems?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fusion for item representation
Sequence-in-set-out candidate generation
Prompt engineering for optimizing text inputs
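The "sequence-in-set-out" idea, as described in the abstract, replaces token-by-token generation with a single pass that yields a whole candidate set. A toy sketch of that retrieval step follows; the encoder that produces `user_state`, the dot-product scorer, and all sizes are hypothetical stand-ins for the paper's actual components:

```python
import numpy as np

def sequence_in_set_out(user_state, item_corpus, k):
    """Toy candidate generation: one encoder pass over the behavior
    sequence yields `user_state`; a single scoring pass against the item
    corpus then returns the top-k item set, with no autoregressive
    per-item decoding."""
    scores = item_corpus @ user_state      # (n_items,) similarity scores
    return np.argsort(-scores)[:k]         # indices of the k best items

rng = np.random.default_rng(1)
corpus = rng.normal(size=(100, 16))        # 100 items, 16-dim embeddings
state = rng.normal(size=16)                # encoded user sequence
candidates = sequence_in_set_out(state, corpus, k=5)
```

The efficiency gain in this framing comes from amortizing one encoding pass over the entire candidate set rather than decoding items one at a time.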
Junguang Jiang
Taobao & Tmall Group of Alibaba, China
Yanwen Huang
PhD Candidate, Department of Pharmaceutical Sciences, Peking University
Bin Liu
Taobao & Tmall Group of Alibaba, China
Xiaoyu Kong
Taobao & Tmall Group of Alibaba, China
Ziru Xu
Alibaba Group
Han Zhu
Taobao & Tmall Group of Alibaba, China
Jian Xu
Taobao & Tmall Group of Alibaba, China
Bo Zheng
Taobao & Tmall Group of Alibaba, China