🤖 AI Summary
To address the limitations of large language models (LLMs) in knowledge-intensive question answering (context-length constraints and reliance on parametric knowledge), as well as the poor generalizability and fine-tuning dependence of existing graph retrieval methods, this paper proposes LLM-KGFR, a synergistic LLM-knowledge-graph framework. Its core is KGFR, a structured retriever that generalizes zero-shot to unseen knowledge graphs without fine-tuning. KGFR combines LLM-generated relation descriptions, question-role-driven entity initialization, an Asymmetric Progressive Propagation (APP) algorithm, and a three-level interaction interface (nodes, edges, and paths) to enable controllable, interpretable, iterative reasoning, supporting efficient retrieval and multi-granularity reasoning over both large-scale and unseen knowledge graphs. Experiments demonstrate that LLM-KGFR significantly improves accuracy and efficiency across multiple knowledge graph question answering (KGQA) benchmarks while exhibiting strong scalability and zero-shot generalization.
📝 Abstract
Large language models (LLMs) excel at reasoning but struggle with knowledge-intensive questions due to limited context and parametric knowledge. Existing methods that augment LLMs with knowledge graphs rely on fine-tuned LLMs or GNN retrievers, which require dataset-specific tuning and scale poorly to large or unseen graphs. We propose the LLM-KGFR collaborative framework, in which an LLM works with a structured retriever, the Knowledge Graph Foundation Retriever (KGFR). KGFR encodes relations using LLM-generated descriptions and initializes entities based on their roles in the question, enabling zero-shot generalization to unseen KGs. To handle large graphs efficiently, it employs Asymmetric Progressive Propagation (APP), a stepwise expansion that selectively limits high-degree nodes while retaining informative paths. Through node-, edge-, and path-level interfaces, the LLM iteratively requests candidate answers, supporting facts, and reasoning paths, forming a controllable reasoning loop. Experiments demonstrate that LLM-KGFR achieves strong performance while maintaining scalability and generalization, providing a practical solution for KG-augmented reasoning.
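The abstract describes Asymmetric Progressive Propagation only at a high level: expand the graph neighborhood step by step while capping the fan-out of high-degree hub nodes. The toy sketch below illustrates that degree-capping idea on a triple list; the function name, the fixed per-node cap, and the first-k edge selection are all hypothetical simplifications (a real retriever would rank edges by learned relevance), not the paper's actual algorithm.

```python
from collections import defaultdict

def progressive_expand(edges, seeds, num_steps=2, degree_cap=3):
    """Toy degree-capped stepwise expansion over (head, relation, tail) triples.

    Starting from the seed entities, grow the frontier one hop at a time,
    but retain at most `degree_cap` outgoing edges per node so that hub
    entities do not flood the extracted subgraph.
    """
    adj = defaultdict(list)
    for head, rel, tail in edges:
        adj[head].append((rel, tail))

    visited = set(seeds)
    frontier = set(seeds)
    retained = []  # triples kept in the expanded subgraph
    for _ in range(num_steps):
        next_frontier = set()
        for node in frontier:
            # Cap expansion at high-degree nodes; here we naively keep the
            # first few edges instead of ranking them by relevance.
            for rel, tail in adj[node][:degree_cap]:
                retained.append((node, rel, tail))
                if tail not in visited:
                    visited.add(tail)
                    next_frontier.add(tail)
        frontier = next_frontier
    return visited, retained
```

On a small graph where the question entity `q` has four neighbors, a cap of 3 drops the fourth edge while still following the informative two-hop path through `a`.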