🤖 AI Summary
Existing LLM-based approaches to knowledge graph construction take a local, text-centric perspective, extracting triples from individual passages; this hinders cross-document information integration and implicit relation discovery. To address this, we propose Graphusion: a zero-shot, end-to-end framework that leverages seed entities to guide LLM-based triple extraction and introduces a novel global fusion module, supporting entity merging, conflict resolution, and novel triplet discovery, thereby going beyond conventional local extraction. We formally define the zero-shot knowledge graph construction (KGC) task, which requires no fine-tuning, and present TutorQA, a new expert-verified QA benchmark applied in an educational scenario. Experiments show Graphusion achieves ratings of 2.92/3 for entity extraction and 2.37/3 for relation recognition, improves sub-graph completion accuracy by 9.2%, and significantly enhances downstream question answering performance.
📝 Abstract
Knowledge Graphs (KGs) are crucial in the field of artificial intelligence and are widely used in downstream tasks, such as question answering (QA). Constructing a KG typically requires significant effort from domain experts. Large Language Models (LLMs) have recently been used for Knowledge Graph Construction (KGC). However, most existing approaches focus on a local perspective, extracting knowledge triplets from individual sentences or documents, and lack a fusion process that combines the extracted knowledge into a global KG. This work introduces Graphusion, a zero-shot KGC framework that builds KGs from free text. It consists of three steps: in Step 1, we extract a list of seed entities using topic modeling, ensuring the final KG includes the most relevant entities; in Step 2, we conduct candidate triplet extraction using LLMs; in Step 3, we design a novel fusion module that provides a global view of the extracted knowledge, incorporating entity merging, conflict resolution, and novel triplet discovery. Results show that Graphusion achieves scores of 2.92 and 2.37 out of 3 for entity extraction and relation recognition, respectively. Moreover, we showcase how Graphusion can be applied to the Natural Language Processing (NLP) domain and validate it in an educational scenario. Specifically, we introduce TutorQA, a new expert-verified benchmark for QA, comprising six tasks and a total of 1,200 QA pairs. Using the Graphusion-constructed KG, we achieve a significant improvement on the benchmark, for example, a 9.2% accuracy improvement on sub-graph completion.
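The three-step pipeline above can be sketched as a minimal skeleton. This is an illustrative assumption of the data flow only, not the authors' implementation: the frequency-based seed picker stands in for real topic modeling, the co-occurrence pairing stands in for LLM-based triplet extraction, and fusion is reduced to deduplication (the actual fusion module also performs entity merging, conflict resolution, and novel triplet discovery).

```python
# Hypothetical sketch of the Graphusion three-step flow; all function names
# and heuristics here are placeholders, not the paper's method.
from collections import Counter


def extract_seed_entities(documents, top_k=3):
    """Step 1 stand-in: treat frequent capitalized terms as seed entities
    (a proxy for topic modeling) to keep the KG focused on relevant entities."""
    counts = Counter(
        tok.strip(".,")
        for doc in documents
        for tok in doc.split()
        if tok[0].isupper()
    )
    return [term for term, _ in counts.most_common(top_k)]


def extract_candidate_triples(documents, seeds):
    """Step 2 stand-in: an LLM would extract (head, relation, tail) triplets
    per document; here we just link co-occurring seeds with a dummy relation."""
    triples = []
    for doc in documents:
        present = [s for s in seeds if s in doc]
        for i in range(len(present) - 1):
            triples.append((present[i], "related_to", present[i + 1]))
    return triples


def fuse(triples):
    """Step 3 stand-in: global fusion reduced to deduplication across
    documents, yielding a single consolidated triple set."""
    return sorted(set(triples))


docs = [
    "Transformers power modern NLP systems.",
    "NLP benefits from Transformers and Attention.",
]
seeds = extract_seed_entities(docs)
kg = fuse(extract_candidate_triples(docs, seeds))
print(kg)
```

The key design point the sketch preserves is that fusion operates over candidates pooled from *all* documents, which is what gives the framework its global view.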