🤖 AI Summary
This paper addresses the mutual enhancement between knowledge graphs (KGs) and large language models (LLMs), aiming to improve LLMs' factual accuracy and reasoning capabilities while advancing the automated construction and evolution of KGs. Methodologically, it integrates symbolic logic, graph neural networks, prompt engineering, retrieval-augmented generation (RAG), KG embedding, and LLM fine-tuning into a unified framework. It is the first survey to systematically evaluate KG-LLM integration paradigms along three critical dimensions: scalability, computational efficiency, and data quality. The work proposes a neuro-symbolic fusion architecture, a dynamic KG updating mechanism, a trustworthy data governance framework, and an ethics-aware alignment pathway, thereby clarifying the technical landscape and identifying key bottlenecks. These contributions provide both theoretical foundations and practical guidelines for developing next-generation knowledge-intelligent systems that are highly reliable, adaptive, and evolvable.
📝 Abstract
Integrating structured knowledge from Knowledge Graphs (KGs) into Large Language Models (LLMs) enhances factual grounding and reasoning capabilities. This survey systematically examines the synergy between KGs and LLMs, categorizing existing approaches into two main groups: KG-enhanced LLMs, which improve reasoning, reduce hallucinations, and enable complex question answering; and LLM-augmented KGs, which facilitate KG construction, completion, and querying. Through comprehensive analysis, we identify critical gaps and highlight the mutual benefits of structured knowledge integration. Compared to existing surveys, our study uniquely emphasizes scalability, computational efficiency, and data quality. Finally, we propose future research directions, including neuro-symbolic integration, dynamic KG updating, data reliability, and ethical considerations, paving the way for intelligent systems capable of managing more complex real-world knowledge tasks.