Enhancing LLM-based Recommendation with Preference Hint Discovery from Knowledge Graph

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively modeling complex user preferences in large language model (LLM)-based recommender systems under sparse interactions and noisy attributes. To this end, the authors propose a preference hint discovery framework that integrates user interaction data with a knowledge graph. The framework employs a collaborative preference hint extraction schema and an instance-wise dual-attention mechanism to accurately identify personalized semantic hints for unseen items. Furthermore, it adopts a flattened hint organization strategy to shorten the LLM's input. By synergistically combining knowledge graphs, large language models, and prompt engineering, the proposed method consistently outperforms existing baselines on both pair-wise and list-wise recommendation tasks, achieving an average relative performance improvement of over 3.02%.
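The dual-attention idea described above can be sketched in a few lines: score each candidate attribute from both the user view and the item view, multiply the two attention distributions as a credibility estimate, and keep the top-k attributes as hints. This is a minimal illustrative sketch with made-up embeddings; the function name, the multiplicative combination, and the top-k cutoff are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention_hints(user_vec, item_vec, attr_vecs, attr_names, top_k=2):
    """Score candidate attributes from both the user and the unseen-item
    view, then keep the top-k as preference hints (hypothetical sketch)."""
    user_scores = softmax(attr_vecs @ user_vec)   # user-side attention
    item_scores = softmax(attr_vecs @ item_vec)   # item-side attention
    combined = user_scores * item_scores          # per-attribute credibility
    order = np.argsort(-combined)[:top_k]
    return [attr_names[i] for i in order]

# Toy example: attribute "family-friendly" aligns with both user and item.
user_vec = np.array([1.0, 0.0])
item_vec = np.array([0.0, 1.0])
attr_vecs = np.array([[1.0, 1.0],   # family-friendly
                      [1.0, 0.0],   # low-budget
                      [0.0, 1.0]])  # sequel
names = ["family-friendly", "low-budget", "sequel"]
print(dual_attention_hints(user_vec, item_vec, attr_vecs, names, top_k=1))
```

Attributes that only one side attends to are down-weighted by the product, which is one plausible way to filter noisy attributes before they reach the LLM.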

📝 Abstract
LLMs have garnered substantial attention in recommendation systems. Yet they fall short of traditional recommenders when capturing complex preference patterns. Recent works have tried integrating traditional recommendation embeddings into LLMs to resolve this issue, yet a core gap persists between their continuous embedding and discrete semantic spaces. Intuitively, textual attributes derived from interactions can serve as critical preference rationales for LLMs' recommendation logic. However, directly inputting such attribute knowledge presents two core challenges: (1) sparse interactions fail to reflect preference hints for unseen items; (2) treating all attributes as hints introduces substantial noise. To this end, we propose a preference hint discovery model based on the interaction-integrated knowledge graph, enhancing LLM-based recommendation. It utilizes traditional recommendation principles to selectively extract crucial attributes as hints. Specifically, we design a collaborative preference hint extraction schema, which utilizes semantic knowledge from similar users' explicit interactions as hints for unseen items. Furthermore, we develop an instance-wise dual-attention mechanism to quantify the preference credibility of candidate attributes, identifying hints specific to each unseen item. Using these item- and user-based hints, we adopt a flattened hint organization method to shorten input length and feed the textual hint information to the LLM for commonsense reasoning. Extensive experiments on both pair-wise and list-wise recommendation tasks verify the effectiveness of our proposed framework, indicating an average relative improvement of over 3.02% against baselines.
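The "flattened hint organization" step amounts to packing the discovered user- and item-based hints into one compact textual prompt rather than verbose per-triple sentences. A minimal sketch of such a flattening, assuming a simple comma-joined template (the paper's exact prompt format is not given here):

```python
def flatten_hints(user_hints, item_hints, candidate):
    """Pack item- and user-based hints into one flat prompt string to
    shorten LLM input (illustrative template, not the paper's own)."""
    return (
        f"User preference hints: {', '.join(user_hints)}. "
        f"Hints for candidate item '{candidate}': {', '.join(item_hints)}. "
        f"Based on these hints, would the user like '{candidate}'? Answer yes or no."
    )

prompt = flatten_hints(
    user_hints=["prefers sci-fi", "watches sequels"],
    item_hints=["sci-fi genre", "directed by a favored director"],
    candidate="Dune: Part Two",
)
print(prompt)
```

Joining hints on one line per source keeps the token count roughly linear in the number of selected hints, which matters when many candidate items share a context window in list-wise ranking.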
Problem

Research questions and friction points this paper is trying to address.

LLM-based recommendation
preference hint
knowledge graph
sparse interactions
attribute noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference Hint Discovery
Knowledge Graph
Large Language Model (LLM)
Collaborative Filtering
Dual-Attention Mechanism