Knowledge Graph Large Language Model (KG-LLM) for Link Prediction

📅 2024-03-12
🏛️ Asian Conference on Machine Learning
📈 Citations: 32
Influential: 0
📄 PDF
🤖 AI Summary
Multi-hop link prediction in knowledge graphs (KGs) requires a model to reason through and understand all intermediate connections before making a prediction, a setting where existing approaches generalize poorly to unseen scenarios. Method: This paper proposes KG-LLM, a framework that converts structured KG data (entities and their interrelations) into natural language prompts and uses these prompts to fine-tune large language models for multi-hop link prediction. Three leading LLMs are fine-tuned within the framework: Flan-T5, LLaMa2, and Gemma. Contribution/Results: The framework is also explored as a route to zero-shot handling of previously unseen prompts. Experimental results show that KG-LLM significantly improves the models' generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.

📝 Abstract
The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM framework, we fine-tune three leading LLMs within this framework: Flan-T5, LLaMa2, and Gemma. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models' generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.
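The KG-to-prompt conversion described in the abstract can be sketched as follows. This is an illustrative assumption of what such a step might look like, not the paper's actual template: the verbalization rule, prompt wording, and function names (`triple_to_sentence`, `path_to_prompt`) are all hypothetical.

```python
# Hypothetical sketch: verbalizing a multi-hop KG path into a natural
# language prompt for link prediction. Template wording is an assumption,
# not the paper's exact format.

def triple_to_sentence(head, relation, tail):
    """Verbalize one KG triple as a short natural-language sentence,
    e.g. ("Alice", "works_for", "AcmeCorp") -> "Alice works for AcmeCorp."."""
    return f"{head} {relation.replace('_', ' ')} {tail}."

def path_to_prompt(path):
    """Turn a multi-hop path (a list of (head, relation, tail) triples)
    into an instruction prompt asking whether the first and last
    entities in the path are linked."""
    context = " ".join(triple_to_sentence(h, r, t) for h, r, t in path)
    source, target = path[0][0], path[-1][2]
    return (
        "Given the facts below, answer yes or no.\n"
        f"Facts: {context}\n"
        f"Question: Is there a relationship between {source} and {target}?"
    )

# Example two-hop path
path = [
    ("Alice", "works_for", "AcmeCorp"),
    ("AcmeCorp", "headquartered_in", "Berlin"),
]
print(path_to_prompt(path))
```

Prompts of this shape would then serve as fine-tuning inputs for an instruction-tuned LLM, with the gold link label ("yes"/"no") as the target output.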
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-hop link prediction in knowledge graphs using LLMs
Converting structured KG data into natural language prompts
Improving generalization for unseen scenarios via zero-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts KG data to natural language prompts
Fine-tunes LLMs for multi-hop link prediction
Enables zero-shot learning for unseen prompts