Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address critical challenges in knowledge graph (KG)-based recommendation—including high graph noise, difficulty in KG updating, and hallucination interference from large language models (LLMs)—this paper proposes a trust-enhanced KG-LLM collaborative recommendation framework. Methodologically, it introduces a novel confidence-aware subgraph enhancement mechanism, integrating confidence-guided message passing with dual-view contrastive learning, and incorporates an explainability generation module. Technically, the framework synergizes LLM-based semantic understanding, confidence modeling, graph neural networks, and contrastive learning to achieve dynamic KG denoising and semantic alignment. Extensive experiments on multiple public benchmarks demonstrate significant improvements in both recommendation accuracy and explainability, while effectively mitigating KG noise and LLM hallucinations. This work establishes a new paradigm for trustworthy, KG-augmented recommendation systems.

📝 Abstract
Knowledge Graph-based recommendations have gained significant attention due to their ability to leverage rich semantic relationships. However, constructing and maintaining Knowledge Graphs (KGs) is resource-intensive, and the accuracy of KGs can suffer from noisy, outdated, or irrelevant triplets. Recent advancements in Large Language Models (LLMs) offer a promising way to improve the quality and relevance of KGs for recommendation tasks. Despite this, integrating LLMs into KG-based systems presents challenges, such as efficiently augmenting KGs, addressing hallucinations, and developing effective joint learning methods. In this paper, we propose the Confidence-aware KG-based Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework that combines KGs and LLMs for recommendation tasks. The framework includes: (1) an LLM-based subgraph augmenter for enriching KGs with high-quality information, (2) a confidence-aware message propagation mechanism to filter noisy triplets, and (3) a dual-view contrastive learning method to integrate user-item interactions and KG data. Additionally, we employ a confidence-aware explanation generation process to guide LLMs in producing realistic explanations for recommendations. Finally, extensive experiments demonstrate the effectiveness of CKG-LLMA across multiple public datasets.
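To make the confidence-aware message propagation idea concrete, here is a minimal sketch of one propagation step. This is not the paper's implementation: the function name, the TransE-style plausibility scorer, and the residual update rule are all illustrative assumptions; the key idea shown is that messages from low-confidence triplets are down-weighted before aggregation.

```python
import numpy as np

def confidence_weighted_propagation(entity_emb, triplets, rel_emb, tau=1.0):
    """One confidence-aware message-passing step over KG triplets (sketch).

    entity_emb: (num_entities, d) array of entity embeddings
    triplets:   list of (head, relation, tail) index tuples
    rel_emb:    (num_relations, d) array of relation embeddings
    Each triplet gets a confidence score from a simple plausibility
    measure (TransE-style, assumed here for illustration); messages
    from low-confidence triplets contribute less to the aggregate.
    """
    num_entities, _ = entity_emb.shape
    agg = np.zeros_like(entity_emb)
    weight_sum = np.zeros(num_entities)
    for h, r, t in triplets:
        # TransE-style plausibility: small ||h + r - t|| => high confidence
        dist = np.linalg.norm(entity_emb[h] + rel_emb[r] - entity_emb[t])
        conf = np.exp(-dist / tau)          # confidence in (0, 1]
        agg[h] += conf * entity_emb[t]      # confidence-weighted message
        weight_sum[h] += conf
    mask = weight_sum > 0
    agg[mask] /= weight_sum[mask, None]     # confidence-weighted mean
    # residual update; entities with no incoming triplets stay unchanged
    return 0.5 * entity_emb + 0.5 * np.where(mask[:, None], agg, entity_emb)
```

In a full GNN this weighting would sit inside a learned layer and the confidence scorer would itself be trained, but the filtering effect on noisy triplets is the same.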
Problem

Research questions and friction points this paper is trying to address.

Enhancing Knowledge Graph accuracy
Integrating Large Language Models efficiently
Improving recommendation quality with confidence-awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based subgraph augmentation
Confidence-aware message propagation
Dual-view contrastive learning method
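The dual-view contrastive objective can be sketched as an InfoNCE-style loss that pulls together the two embeddings of the same user or item (one from the interaction view, one from the KG view) while pushing apart embeddings of different nodes. This is a minimal sketch under assumed conventions, not the paper's loss; the function name and temperature value are illustrative.

```python
import numpy as np

def dual_view_infonce(view_a, view_b, temperature=0.2):
    """InfoNCE-style contrastive loss between two views of the same nodes.

    view_a: (n, d) embeddings from the user-item interaction view
    view_b: (n, d) embeddings from the KG view; row i in both views is
    the same node (positive pair), all other rows act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature            # (n, n) similarity matrix
    # cross-entropy with the diagonal entries as the positive class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two views agree on each node the loss is low; misaligned views (e.g. pairing a node's interaction embedding with a different node's KG embedding) drive it up, which is what encourages the semantic alignment described in the summary.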