RecLM: Recommendation Instruction Tuning

📅 2024-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient user preference modeling in sparse-data and zero-shot recommendation scenarios, this paper proposes a model-agnostic instruction-tuning paradigm for recommender systems. The method combines large language models (LLMs) with collaborative filtering: a graph neural network encodes the user-item interaction structure, while a preference-diversity-aware reinforcement learning (RL) reward function guides the LLM to self-augment its preference understanding during instruction tuning. Key contributions include: (1) the first instruction-tuning framework designed specifically for recommendation tasks; (2) a transferable RL reward mechanism that improves robustness in preference modeling; and (3) plug-and-play compatibility with existing recommender models. Extensive experiments on multiple benchmark datasets show significant improvements in Recall@K and NDCG across sparse and zero-shot settings, confirming strong generalization capability. The implementation is publicly available.
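The paper does not spell out the preference-diversity-aware RL reward here, but its stated goal (rewarding accurate recommendations while encouraging diverse preference coverage) can be sketched. Below is a minimal, hypothetical reward function: the function name `diversity_reward`, the category-entropy diversity term, and the `alpha` trade-off weight are all illustrative assumptions, not the authors' formulation.

```python
import math
from collections import Counter

def diversity_reward(recommended_items, relevant_items, categories, alpha=0.5):
    """Hypothetical reward: accuracy blended with category diversity.

    recommended_items: ranked list of item ids produced by the LLM
    relevant_items:    set of ground-truth items the user interacted with
    categories:        mapping item id -> category label
    alpha:             assumed trade-off weight between the two terms
    """
    if not recommended_items:
        return 0.0
    # Accuracy term: fraction of recommendations that hit ground truth.
    hits = sum(1 for item in recommended_items if item in relevant_items)
    accuracy = hits / len(recommended_items)
    # Diversity term: normalized Shannon entropy over recommended categories.
    counts = Counter(categories[item] for item in recommended_items)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    diversity = entropy / max_entropy
    return alpha * accuracy + (1 - alpha) * diversity
```

In an RL instruction-tuning loop, a scalar like this would score each generated recommendation list and drive a policy-gradient update of the language model.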

📝 Abstract
Modern recommender systems aim to deeply understand users' complex preferences through their past interactions. While deep collaborative filtering approaches using Graph Neural Networks (GNNs) excel at capturing user-item relationships, their effectiveness is limited when handling sparse data or zero-shot scenarios, primarily due to constraints in ID-based embedding functions. To address these challenges, we propose a model-agnostic recommendation instruction-tuning paradigm that seamlessly integrates large language models with collaborative filtering. Our proposed $\underline{Rec}$ommendation $\underline{L}$anguage $\underline{M}$odel (RecLM) enhances the capture of user preference diversity through a carefully designed reinforcement learning reward function that facilitates self-augmentation of language models. Comprehensive evaluations demonstrate significant advantages of our approach across various settings, and its plug-and-play compatibility with state-of-the-art recommender systems results in notable performance enhancements. The implementation of our RecLM framework is publicly available at: https://github.com/HKUDS/RecLM.
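The abstract's "plug-and-play compatibility" suggests that language-model-derived signals are injected into an existing recommender without changing its architecture. A minimal sketch of one common way to do this, assuming the LLM produces profile text embeddings that are projected into the GNN's ID-embedding space and gated against the collaborative embeddings (the function `fuse_embeddings` and the fixed `gate` weight are hypothetical, not taken from the paper):

```python
import numpy as np

def fuse_embeddings(gnn_emb, llm_emb, proj, gate=0.5):
    """Hypothetical plug-and-play fusion.

    gnn_emb: (n, d_gnn) ID-based embeddings from the collaborative GNN
    llm_emb: (n, d_llm) profile embeddings derived from the LLM
    proj:    (d_llm, d_gnn) projection into the GNN embedding space
    gate:    assumed mixing weight between the two signals
    """
    # Project semantic embeddings into the collaborative space, then mix.
    projected = llm_emb @ proj
    return gate * gnn_emb + (1 - gate) * projected
```

The fused embeddings can then be scored by the downstream recommender's unchanged dot-product or MLP head, which is what makes the integration model-agnostic.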
Problem

Research questions and friction points this paper is trying to address.

Recommendation Systems
Data Sparsity
New Item Cold Start
Innovation

Methods, ideas, or system contributions that make the work stand out.

RecLM
Graph Neural Networks
Reward Mechanism