RDRec: Rationale Distillation for LLM-based Recommendation

📅 2024-05-17
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 4
Influential: 1
🤖 AI Summary
Existing LLM-based recommendation methods neglect fine-grained semantic modeling of user preferences and item characteristics, resulting in limited interpretability and suboptimal accuracy. To address this, we propose RDRec, a rationale distillation framework that pioneers the use of large-model-generated interaction rationales as supervisory signals to distill knowledge into a lightweight student model. RDRec explicitly extracts and structurally represents implicit user preferences and item attribute rationales from raw user reviews. It further integrates rationale-augmented sequential modeling with Top-N recommendation. Extensive experiments on multiple public benchmarks demonstrate that RDRec achieves state-of-the-art performance, improving Recall@20 by up to 12.3% over strong baselines. The method significantly enhances both recommendation accuracy and interpretability while maintaining strong generalization capability and deployment efficiency.

📝 Abstract
Large language model (LLM)-based recommender models that bridge users and items through textual prompts for effective semantic reasoning have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, limiting the reasoning capability of LLMs for recommendations. This paper proposes a rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales from reviews related to users and items, RDRec remarkably specifies their profiles for recommendations. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendations. Our source code is released at https://github.com/WangXFng/RDRec.
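The distillation setup the abstract describes can be sketched as a data-construction step: a larger LM first turns each user review into a rationale (user preference and item attribute), and the compact student is then trained on prompt-to-rationale pairs. This is a minimal illustrative sketch, not the paper's implementation; the function names, prompt wording, and data shapes are assumptions.

```python
def make_rationale_prompt(user, item, review):
    # Hypothetical prompt asking the student to explain an interaction.
    return (f"Explain why user {user} interacted with item {item}, "
            f"given the review: \"{review}\"")

def build_distillation_pairs(interactions, teacher_rationales):
    """Pair each interaction's prompt with the teacher LM's rationale,
    which serves as the supervision target for the compact student model."""
    pairs = []
    for user, item, review in interactions:
        prompt = make_rationale_prompt(user, item, review)
        # Teacher output is assumed to be precomputed per (user, item).
        target = teacher_rationales[(user, item)]
        pairs.append((prompt, target))
    return pairs

# Toy example with one interaction and a made-up teacher rationale.
interactions = [("u1", "i9", "Light and sturdy tent, easy to pitch.")]
teacher = {("u1", "i9"): "The user prefers lightweight, durable camping gear; "
                         "the item is a light, sturdy tent."}
pairs = build_distillation_pairs(interactions, teacher)
```

The resulting `(prompt, rationale)` pairs would then feed a standard sequence-to-sequence fine-tuning loop for the student, which is the knowledge-transfer step the paper's experiments evaluate on top-N and sequential recommendation.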
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
User Preferences
Item Characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

RDRec
Large Language Models
Enhanced Recommendation Accuracy
Xinfeng Wang
Graduate School of Engineering, University of Yamanashi, Kofu, Japan
Jin Cui
Yoshimi Suzuki
Interdisciplinary Graduate School, University of Yamanashi, Kofu, Japan
Fumiyo Fukumoto
Interdisciplinary Graduate School, University of Yamanashi, Kofu, Japan