🤖 AI Summary
To address hallucination and erroneous recommendations in large language models (LLMs) caused by insufficient domain knowledge in recommendation tasks, this paper proposes a knowledge-enhanced retrieval-augmented generation (RAG) framework. The method jointly leverages user textual interactions and structured knowledge graph (KG) information. It employs a pre-trained graph attention network (GAT) for fine-grained, relevance-driven triple filtering over the KG, enabling high-precision, low-noise knowledge injection. It then integrates GraphRAG to dynamically fuse heterogeneous multi-source information, including textual and structural signals, during LLM inference. Extensive experiments on three public benchmark datasets demonstrate that the proposed approach consistently outperforms ten state-of-the-art (SOTA) recommendation models across key metrics, confirming its effectiveness in improving both recommendation accuracy and reliability.
📝 Abstract
Large Language Models (LLMs) have shown strong potential in recommender systems due to their contextual learning and generalisation capabilities. Existing LLM-based recommendation approaches typically formulate the recommendation task using specialised prompts designed to leverage their contextual abilities, and align their outputs closely with human preferences to yield improved recommendation performance. However, the use of LLMs for recommendation tasks is limited by the absence of domain-specific knowledge. This lack of relevant relational knowledge about the items to be recommended in the LLM's pre-training corpus can lead to inaccuracies or hallucinations, resulting in incorrect or misleading recommendations. Moreover, directly using information from the knowledge graph introduces redundant and noisy information, which can affect the LLM's reasoning process or exceed its input context length, thereby reducing the performance of LLM-based recommendations. To address the lack of domain-specific knowledge, we propose a novel model called Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation (KERAG_R). Specifically, we leverage a graph retrieval-augmented generation (GraphRAG) component to integrate additional information from a knowledge graph (KG) into instructions, enabling the LLM to collaboratively exploit recommendation signals from both text-based user interactions and the knowledge graph to better estimate the users' preferences in a recommendation context. In particular, we perform graph RAG by pre-training a graph attention network (GAT) to select the most relevant triple for the target users, thereby enhancing the LLM while reducing redundant and noisy information. Our extensive experiments on three public datasets show that our proposed KERAG_R model significantly outperforms ten existing state-of-the-art recommendation methods.
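The pipeline the abstract describes, scoring KG triples for relevance to a user and injecting only the top triple into the LLM prompt, can be illustrated with a minimal sketch. This is not the paper's implementation: the dot-product scorer below is a hypothetical stand-in for the pre-trained GAT, and all embeddings, item names, and the prompt template are invented for illustration.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def score_triples(user_vec, triple_vecs):
    # Attention-style relevance: dot product between the user embedding
    # and each candidate triple embedding, normalised with softmax.
    # (A stand-in for the paper's pre-trained GAT scorer.)
    raw = [sum(u * t for u, t in zip(user_vec, tv)) for tv in triple_vecs]
    return softmax(raw)

def select_top_triple(user_vec, triples, triple_vecs):
    # Keep only the single most relevant triple, so the prompt stays
    # short and low-noise rather than dumping the whole KG neighbourhood.
    weights = score_triples(user_vec, triple_vecs)
    best = max(range(len(triples)), key=lambda i: weights[i])
    return triples[best]

def build_prompt(user_history, triple):
    # Inject the selected KG triple alongside the textual interaction
    # history, as in the knowledge-augmented instruction described above.
    h, r, t = triple
    return (
        f"Knowledge: {h} {r} {t}.\n"
        f"The user recently interacted with: {', '.join(user_history)}.\n"
        "Recommend the next item."
    )

# Toy example with hypothetical 2-d embeddings.
triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "sci-fi"),
]
triple_vecs = [[0.9, 0.1], [0.2, 0.8]]
user_vec = [0.1, 1.0]  # this user's embedding leans towards genre signals

top = select_top_triple(user_vec, triples, triple_vecs)
print(build_prompt(["Interstellar", "Blade Runner"], top))
```

The key design point mirrored here is that filtering happens *before* generation: only the highest-scoring triple reaches the prompt, which bounds context length and suppresses noisy KG neighbours.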