Large Language Models for Knowledge Graph Embedding Techniques, Methods, and Challenges: A Survey

📅 2025-01-14
🤖 AI Summary
Existing research on integrating large language models (LLMs) with knowledge graph embedding (KGE) lacks a unified conceptual framework, hindering systematic comparison and adoption. Method: We propose the first task-oriented taxonomy for the LLM-KGE intersection, categorizing approaches by application paradigms—including multimodal KGE and open-domain KGE—and specifying model adaptation strategies for each. Through comprehensive literature analysis and methodological abstraction, we construct a structured survey table covering mainstream techniques, their application scenarios, technical characteristics, and publicly available implementations. We further introduce an extensible evaluation benchmark grounded in standardized metrics and datasets. Contribution/Results: This work provides a theoretically grounded, empirically informed foundation for LLM-KGE integration, delivering a practical guidance framework, reproducible resources, and concrete directions for future research—thereby advancing both methodological rigor and real-world applicability in neuro-symbolic AI.

📝 Abstract
Large Language Models (LLMs) are deep learning models from Natural Language Processing (NLP) that are trained on massive text corpora, with hundreds of millions of parameters or more, to predict the next word or generate content related to a given text. Owing to their strong performance, LLMs have attracted broad attention across fields and are increasingly applied to knowledge graph embedding (KGE) tasks to improve results. Recently, LLMs have been invoked to varying degrees in different types of KGE scenarios, such as multi-modal KGE and open KGE, according to the characteristics of each task. In this paper, we survey a wide range of approaches for performing LLM-related tasks across these KGE scenarios. To facilitate comparison, we organize the approaches for each KGE scenario into a classification, and we additionally provide a tabular overview of the methods together with links to their source code for more direct comparison. We also discuss the main applications of these methods and suggest several forward-looking directions for this emerging research area.
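As a toy illustration of the KGE side of this intersection, the sketch below scores a knowledge-graph triple with a TransE-style translational distance, where entity vectors are derived from textual descriptions. A deterministic hash-based encoder stands in for a real LLM text encoder; all names, dimensions, and vectors here are illustrative assumptions, not details taken from the surveyed methods.

```python
import hashlib

DIM = 8  # toy embedding dimension (assumption, not from the paper)

def encode_text(text, dim=DIM):
    """Stand-in for an LLM text encoder: a deterministic, normalized
    hashed bag-of-words vector. A real pipeline would use an actual
    pretrained language model here."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): negative L2 distance
    between h + r and t. Higher (closer to 0) means more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Entity embeddings come from textual descriptions; the relation vector
# would normally be learned, and is a fixed toy vector here.
paris = encode_text("Paris is the capital city of France")
france = encode_text("France is a country in Europe")
capital_of = [0.1] * DIM

score = transe_score(paris, capital_of, france)
```

In the LLM-augmented methods the survey covers, the text encoder and the triple-scoring function are typically trained jointly rather than fixed as in this sketch.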
Problem

Research questions and friction points this paper is trying to address:
- Large Language Models
- Knowledge Graph Embeddings
- Enhancement Techniques

Innovation

Methods, ideas, or system contributions that make the work stand out:
- Large Language Models
- Knowledge Graph Embeddings
- Multimodal and Open Scenarios