RALLRec+: Retrieval Augmented Large Language Model Recommendation with Reasoning

📅 2025-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the dual challenges of semantic mismatch in retrieval-augmented generation (RAG)-based recommendation and the lack of interpretable reasoning during generation, this paper proposes an LLM-based recommendation framework integrating representation learning with explicit chain-of-thought (CoT) reasoning. Methodologically: (1) it pioneers the incorporation of CoT into recommendation generation to enhance decision transparency; (2) it constructs a multi-source joint representation fusing textual semantics and collaborative signals; (3) it introduces a lightweight temporal re-ranking module to model user interest evolution; and (4) it strengthens retrieval-generation synergy via knowledge-injected prompting and consistency-aware fusion. Evaluated on three real-world datasets, the framework achieves up to 12.7% improvement in Recall@10 over state-of-the-art methods, while simultaneously enhancing reasoning interpretability and long-tail item coverage.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have been integrated into recommender systems to enhance user behavior comprehension. The Retrieval Augmented Generation (RAG) technique is further incorporated into these systems to retrieve more relevant items and improve system performance. However, existing RAG methods have two shortcomings. extit{(i)} In the extit{retrieval} stage, they rely primarily on textual semantics and often fail to incorporate the most relevant items, thus constraining system effectiveness. extit{(ii)} In the extit{generation} stage, they lack explicit chain-of-thought reasoning, further limiting their potential. In this paper, we propose Representation learning and extbf{R}easoning empowered retrieval- extbf{A}ugmented extbf{L}arge extbf{L}anguage model extbf{Rec}ommendation (RALLRec+). Specifically, for the retrieval stage, we prompt LLMs to generate detailed item descriptions and perform joint representation learning, combining textual and collaborative signals extracted from the LLM and recommendation models, respectively. To account for the time-varying nature of user interests, we propose a simple yet effective reranking method to capture preference dynamics. For the generation phase, we first evaluate reasoning LLMs on recommendation tasks, uncovering valuable insights. Then we introduce knowledge-injected prompting and consistency-based merging approach to integrate reasoning LLMs with general-purpose LLMs, enhancing overall performance. Extensive experiments on three real world datasets validate our method's effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Improves item retrieval by combining textual and collaborative signals
Addresses lack of reasoning in generation via knowledge-injected prompting
Captures dynamic user preferences with effective reranking method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint representation learning with textual and collaborative signals
Time-aware reranking for dynamic user preferences
Knowledge-injected prompting with reasoning LLMs
🔎 Similar Papers
No similar papers found.