Improving Factuality in LLMs via Inference-Time Knowledge Graph Construction

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from factual inconsistency due to the limitations of their parametric memory, and existing retrieval-augmented generation (RAG) methods—relying solely on unstructured textual knowledge—lack support for compositional reasoning and inconsistency detection. Method: We propose a novel paradigm of *inference-time dynamic knowledge graph (KG) construction and expansion*: (1) prompt-driven extraction of an initial seed KG; (2) iterative KG expansion by integrating implicit knowledge from the LLM; and (3) selective refinement via external retrieval. Contribution/Results: This approach enables structured synergy between internal model knowledge and external evidence, supporting fine-grained fact verification, compositional reasoning, and interpretable error correction. Evaluated on three factual question-answering benchmarks, our method significantly outperforms standard prompting and static KG-augmented baselines, achieving state-of-the-art performance in both factual accuracy and answer precision.
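Step (1) of the summarized method — prompt-driven extraction of a seed KG from the question — might be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt wording, the `head | relation | tail` line format, and the `fake_llm` stub (standing in for a real model call) are all assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """A single KG edge: (head entity, relation, tail entity)."""
    head: str
    relation: str
    tail: str


def extract_seed_kg(question: str, llm) -> set:
    """Prompt the LLM to emit 'head | relation | tail' lines and parse them
    into a seed knowledge graph (a set of Triple objects)."""
    prompt = (
        "List the facts implied by the question, one per line, "
        "formatted as 'head | relation | tail'.\n"
        f"Question: {question}"
    )
    triples = set()
    for line in llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):  # skip malformed lines
            triples.add(Triple(*parts))
    return triples


# Hypothetical stub standing in for a real LLM call.
def fake_llm(prompt: str) -> str:
    return "Marie Curie | born in | Warsaw\nMarie Curie | field | physics"


seed = extract_seed_kg("Where was Marie Curie born?", fake_llm)
```

Parsing a fixed delimiter format keeps the extraction step deterministic; malformed model output is silently dropped rather than crashing the pipeline.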

📝 Abstract
Large Language Models (LLMs) often struggle to produce factually consistent answers due to limitations in their parametric memory. Retrieval-Augmented Generation (RAG) methods address this issue by incorporating external knowledge from trusted sources at inference time. However, such methods typically treat knowledge as unstructured text, which limits their ability to support compositional reasoning and identify factual inconsistencies. To overcome these limitations, we propose a novel framework that dynamically constructs and expands knowledge graphs (KGs) during inference, integrating internal knowledge extracted from LLMs with information retrieved from external sources. Our method begins by extracting a seed KG from the question via prompting, followed by iterative expansion using the LLM's latent knowledge. The graph is then selectively refined through external retrieval, enhancing factual coverage and correcting inaccuracies. We evaluate our approach on three diverse factual QA benchmarks, demonstrating consistent improvements in factual accuracy, answer precision, and interpretability over baseline prompting and static KG-augmented methods. Our findings suggest that inference-time KG construction is a promising direction for enhancing LLM factuality in a structured, interpretable, and scalable manner.
Problem

Research questions and friction points this paper is trying to address.

Addresses factual inconsistency in LLM-generated answers
Overcomes limitations of unstructured text in RAG methods
Enhances compositional reasoning and factual accuracy via dynamic KGs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic knowledge graph construction during inference
Integrates internal LLM knowledge with external retrieval
Iterative expansion and refinement for factual accuracy
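The expand-then-refine loop named in the bullets above can be sketched as a small function. This is a hedged sketch, not the paper's code: the triples are plain tuples, and `fake_expand`, `fake_retrieve`, and `fake_verify` are hypothetical stubs standing in for the LLM expansion step, the external retriever, and the fact-verification check.

```python
def expand_and_refine(kg, llm_expand, retrieve, verify, rounds=2):
    """Grow the KG from the LLM's latent knowledge, then keep only
    triples that external evidence supports."""
    for _ in range(rounds):
        kg = kg | llm_expand(kg)  # iterative expansion (step 2)
    # selective refinement via retrieval (step 3)
    return {t for t in kg if verify(t, retrieve(t))}


# Hypothetical stubs standing in for the model and the retriever.
def fake_expand(kg):
    # A real implementation would prompt the LLM for triples
    # connected to entities already in the graph.
    return {("Warsaw", "capital of", "Poland")} if kg else set()


def fake_retrieve(triple):
    # A real implementation would query an external corpus or search API.
    return "Warsaw is the capital of Poland. Marie Curie was born in Warsaw."


def fake_verify(triple, evidence):
    # Naive check: both entities must appear in the retrieved evidence.
    head, _, tail = triple
    return head in evidence and tail in evidence


kg = {("Marie Curie", "born in", "Warsaw")}
refined = expand_and_refine(kg, fake_expand, fake_retrieve, fake_verify)
```

Separating expansion from verification is what makes errors interpretable: any triple dropped in the refinement pass can be reported alongside the evidence that failed to support it.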