📝 Abstract
In today's rapidly expanding data landscape, knowledge extraction from unstructured text is vital for real-time analytics, temporal inference, and dynamic memory frameworks. However, traditional static knowledge graph (KG) construction often overlooks the dynamic and time-sensitive nature of real-world data, limiting adaptability to continuous change. Moreover, recent zero- or few-shot approaches that avoid domain-specific fine-tuning or reliance on prebuilt ontologies often suffer from instability across multiple runs, as well as incomplete coverage of key facts. To address these challenges, we introduce ATOM (AdapTive and OptiMized), a few-shot and scalable approach that builds and continuously updates Temporal Knowledge Graphs (TKGs) from unstructured text. ATOM splits input documents into minimal, self-contained "atomic" facts, improving extraction exhaustivity and stability. It then constructs atomic TKGs from these facts while employing dual-time modeling that distinguishes when information is observed from when it is valid. The resulting atomic TKGs are subsequently merged in parallel. Empirical evaluations show that ATOM achieves ~18% higher exhaustivity, ~17% better stability, and over 90% latency reduction compared to baseline methods, demonstrating strong scalability potential for dynamic TKG construction.