🤖 AI Summary
Knowledge graphs (KGs) suffer from conceptual ambiguity across their static, dynamic, temporal, and event-based variants, as well as from limitations in KG construction and reasoning, particularly in handling multimodal data and temporal dynamics. Method: We propose a unified framework that integrates symbolic rules with neural methods for knowledge extraction and reasoning, coupled with cross-modal alignment and temporal modeling. We establish the first lifecycle-spanning theoretical framework for KG evolution and introduce a novel KG–large language model (LLM) co-engineering paradigm. Contribution/Results: Our work rigorously delineates the conceptual boundaries and technical paradigms of the four KG types; enables robust, scalable KG construction and temporal reasoning; and demonstrates empirical validity and extensibility through a financial risk identification application. The framework advances foundational KG theory and provides a systematic methodology for KG–LLM joint modeling, bridging symbolic and neural AI in knowledge-intensive domains.
📝 Abstract
Knowledge graphs (KGs) are structured representations of diverse knowledge and are widely used in various intelligent applications. In this article, we provide a comprehensive survey of the evolution of various types of knowledge graphs (i.e., static KGs, dynamic KGs, temporal KGs, and event KGs) and of techniques for knowledge extraction and reasoning. Furthermore, we introduce practical applications of the different types of KGs, including a case study in financial analysis. Finally, we offer our perspective on future directions in knowledge engineering, including the potential of combining the power of knowledge graphs and large language models (LLMs), and the continued evolution of knowledge extraction, reasoning, and representation.