🤖 AI Summary
Existing knowledge graph (KG)–large language model (LLM) integration methods suffer from three key bottlenecks: (1) semantic granularity mismatch between KGs and LLMs, (2) redundant retrieval operations, and (3) omission of critical facts—leading to inaccurate retrieval, computational waste, and delayed LLM knowledge updating. To address these, we propose Fine-grained Stateful Knowledge Exploration (FSKE), a novel paradigm that enables node-level semantic alignment and incremental, precise KG retrieval via dynamic semantic parsing and stateful mapping modeling—without requiring prior assumptions about graph structure and supporting online mapping updates. We further design an LLM-coordinated reasoning framework that jointly optimizes retrieval completeness and computational efficiency. Extensive experiments on multiple benchmark datasets demonstrate that FSKE significantly outperforms state-of-the-art methods: it improves knowledge retrieval accuracy by 12.6%–23.4% and reduces average LLM invocation count by 58.3%.
📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities, yet updating their knowledge remains a significant challenge, often leading to outdated or inaccurate responses. A proposed solution is to integrate external knowledge bases, such as knowledge graphs, with LLMs. Most existing methods follow a paradigm that treats the question as the objective, incrementally retrieving relevant knowledge from the knowledge graph. However, this strategy frequently suffers from a mismatch between the granularity of the target question and that of the retrieved entities and relations. As a result, the information in the question cannot be precisely matched to the retrieved knowledge, which may cause redundant exploration or the omission of vital knowledge, increasing computational cost and reducing retrieval accuracy. In this paper, we propose a novel paradigm of fine-grained stateful knowledge exploration that addresses this 'information granularity mismatch' issue. We extract fine-grained information from questions and explore the semantic mapping between this information and the knowledge in the graph. By dynamically updating the mapping records, we avoid redundant exploration and ensure that no pertinent information is overlooked, thereby reducing computational overhead and improving the accuracy of knowledge exploration. The use of fine-grained information also eliminates the need for a priori knowledge, a common requirement of existing methods. Experiments on multiple datasets show that our paradigm surpasses current advanced methods in knowledge retrieval while significantly reducing the average number of LLM invocations.
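The core loop described above can be illustrated with a minimal sketch. All names here (`explore`, `MappingRecord`, the toy triple store) are illustrative assumptions, not the paper's actual implementation; the point is only the stateful bookkeeping that lets resolved question elements skip further retrieval.

```python
# Minimal sketch of fine-grained stateful knowledge exploration.
# The KG, element matching, and class names are hypothetical.
from dataclasses import dataclass, field

# Toy knowledge graph as (head, relation, tail) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "currency", "Euro"),
    ("Paris", "population", "2.1M"),
]

@dataclass
class MappingRecord:
    """Stateful record: which fine-grained question elements are resolved."""
    resolved: dict = field(default_factory=dict)   # element -> matched triple
    unresolved: set = field(default_factory=set)

def explore(elements):
    record = MappingRecord(unresolved=set(elements))
    while record.unresolved:
        element = record.unresolved.pop()
        # Only unresolved elements trigger retrieval, so already-matched
        # information is never re-explored (no redundant exploration).
        match = next((t for t in KG if element in (t[0], t[1])), None)
        if match is not None:
            record.resolved[element] = match
        # Unmatched elements are simply dropped in this sketch; the real
        # method would instead expand the search frontier for them.
    return record.resolved

# Fine-grained elements extracted from a question such as
# "What is the currency of the country whose capital is Paris?"
facts = explore(["capital_of", "currency"])
```

In the actual paradigm, element extraction and semantic matching would be performed by an LLM rather than exact string lookup, but the mapping record plays the same role: it tracks coverage of the question so exploration stops exactly when every fine-grained element is accounted for.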