KGFR: A Foundation Retriever for Generalized Knowledge Graph Question Answering

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of large language models (LLMs) in knowledge-intensive question answering—namely, context-length constraints and reliance on parametric knowledge—as well as the poor generalizability and fine-tuning dependency of existing graph retrieval methods, this paper proposes LLM-KGFR, a synergistic LLM–knowledge graph framework. Its core is KGFR, a zero-shot generalizable structured retriever that requires no fine-tuning. KGFR leverages LLM-generated relation descriptions, question-role-driven entity initialization, a progressive asymmetric propagation (APP) algorithm, and a three-level interaction interface (nodes, edges, and paths) to enable controllable, interpretable, iterative reasoning. It supports efficient retrieval and multi-granularity reasoning over both large-scale and unseen knowledge graphs. Experiments demonstrate that LLM-KGFR significantly improves accuracy and efficiency across multiple knowledge graph question answering (KGQA) benchmarks, while exhibiting strong scalability and zero-shot generalization capability.

📝 Abstract
Large language models (LLMs) excel at reasoning but struggle with knowledge-intensive questions due to limited context and parametric knowledge. Existing methods that rely on fine-tuned LLMs or GNN retrievers are limited by dataset-specific tuning and scale poorly on large or unseen graphs. We propose the LLM-KGFR collaborative framework, where an LLM works with a structured retriever, the Knowledge Graph Foundation Retriever (KGFR). KGFR encodes relations using LLM-generated descriptions and initializes entities based on their roles in the question, enabling zero-shot generalization to unseen KGs. To handle large graphs efficiently, it employs Asymmetric Progressive Propagation (APP), a stepwise expansion that selectively limits high-degree nodes while retaining informative paths. Through node-, edge-, and path-level interfaces, the LLM iteratively requests candidate answers, supporting facts, and reasoning paths, forming a controllable reasoning loop. Experiments demonstrate that LLM-KGFR achieves strong performance while maintaining scalability and generalization, providing a practical solution for KG-augmented reasoning.
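The abstract describes APP only at a high level, so the following is a minimal illustrative sketch of what a "stepwise expansion that selectively limits high-degree nodes" could look like. The function name, the `degree_cap` truncation heuristic, and the graph representation are all assumptions for illustration; the paper's actual scoring of informative paths is not reproduced here.

```python
def asymmetric_progressive_propagation(graph, seeds, max_hops=3, degree_cap=50):
    """Hedged sketch of an APP-style expansion.

    graph: dict mapping node -> list of (relation, neighbor) edges.
    seeds: question entities used to initialize the frontier.
    Returns the set of reached nodes and the retained edges.
    """
    visited = set(seeds)
    retained_edges = []
    frontier = list(seeds)
    for _ in range(max_hops):
        next_frontier = []
        for node in frontier:
            edges = graph.get(node, [])
            # Asymmetry stand-in: high-degree hubs are only partially
            # expanded. A learned retriever would rank edges by relevance
            # to the question; here we simply truncate.
            if len(edges) > degree_cap:
                edges = edges[:degree_cap]
            for rel, nbr in edges:
                retained_edges.append((node, rel, nbr))
                if nbr not in visited:
                    visited.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return visited, retained_edges
```

The key design point suggested by the abstract is that hub nodes are not expanded symmetrically with ordinary nodes, which keeps the frontier from exploding on large graphs while still letting multi-hop paths through.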
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with knowledge-intensive questions due to limited context
Existing methods lack scalability and generalization on unseen knowledge graphs
A framework is needed that combines zero-shot generalization with efficient handling of large graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-KGFR framework enables zero-shot generalization
APP method limits high-degree nodes selectively
Node-edge-path interfaces form controllable reasoning loop
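The controllable reasoning loop above can be sketched as follows. The `decide` callable stands in for the LLM's choice of interface at each round, and `tools` maps the three retrieval granularities to callables; all of these names are invented for illustration and do not reflect the paper's actual API.

```python
def reasoning_loop(question, decide, tools, max_rounds=4):
    """Hedged sketch of the node-/edge-/path-level interaction loop.

    decide(question, context) -> (action, payload), where action is one of
    "nodes", "edges", "paths" (request more evidence) or "answer" (stop).
    tools: dict mapping each retrieval action to a callable returning
    candidate answers, supporting facts, or reasoning paths.
    """
    context = []
    for _ in range(max_rounds):
        action, payload = decide(question, context)
        if action == "answer":
            return payload
        # Accumulate the requested evidence so the next decision can use it.
        context.append((action, tools[action](question, context)))
    return None  # round budget exhausted without a final answer
```

The loop is "controllable" in the sense that the LLM, not the retriever, decides when to fetch more evidence and at which granularity, and the accumulated context makes each step inspectable.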
Yuanning Cui
Nanjing University of Information Science and Technology
Graph Machine Learning, Knowledge Graph, Graph Foundation Model, LLMs
Zequn Sun
Nanjing University
Knowledge Graph, Large Language Model
Wei Hu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China, and also with the National Institute of Healthcare Data Science, Nanjing University, Nanjing 210093, China
Zhangjie Fu
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China, and also with the Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing 210044, China