Aligning Knowledge Graphs and Language Models for Factual Accuracy

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor factual consistency and hallucination tendencies of large language models (LLMs), this paper proposes ALIGNed-LLM, a framework that explicitly aligns pre-trained knowledge graph (KG) embeddings (e.g., TransE) with the LLM's latent space. A learnable projection layer maps KG embeddings and textual representations into a unified semantic space, and the aligned entity representations are injected at the LLM's input layer, with the whole pipeline trained end to end. The method enhances entity recognition, relational reasoning, and factual grounding. Evaluation on multiple open-domain question-answering benchmarks shows substantial improvements in factual accuracy over strong baselines, including state-of-the-art proprietary models such as GPT-4 and Claude. In a real-world financial Q&A application at a major European central bank, ALIGNed-LLM also achieves marked gains in both answer accuracy and reliability, validating its practical efficacy in high-stakes domains.
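The alignment step described above can be sketched as a single trainable linear projection that maps a frozen KG entity embedding into the LLM's token-embedding space and prepends it to the text tokens. This is an illustrative sketch, not the authors' code; the dimensions, variable names, and use of a single linear map are assumptions.

```python
# Illustrative sketch (not the paper's implementation): project a frozen
# KG entity embedding into an LLM's token-embedding space and prepend it
# to the text-token embeddings. All sizes are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

KGE_DIM = 200   # assumed TransE entity-embedding size
LLM_DIM = 768   # assumed LLM hidden size

# Frozen, pre-trained embedding for one entity linked in the question.
entity_emb = rng.normal(size=KGE_DIM)

# The trainable projection layer (a single linear map in this sketch).
W = rng.normal(scale=0.02, size=(LLM_DIM, KGE_DIM))
b = np.zeros(LLM_DIM)

def project(e: np.ndarray) -> np.ndarray:
    """Map a KG embedding into the LLM's latent space."""
    return W @ e + b

# Token embeddings for the input text (sequence length 5, assumed).
token_embs = rng.normal(size=(5, LLM_DIM))

# Prepend the projected entity vector so the LLM attends to it
# alongside the ordinary text tokens.
aligned_input = np.vstack([project(entity_emb)[None, :], token_embs])
print(aligned_input.shape)  # (6, 768)
```

In training, only `W` and `b` (and optionally the LLM) would receive gradients, while the KG embeddings stay fixed, mirroring how LLaVA keeps its vision encoder frozen.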

📝 Abstract
Large language models like GPT-4, Gemini, and Claude have transformed natural language processing (NLP) tasks such as question answering, dialogue generation, summarization, and so forth; yet their susceptibility to hallucination stands as one of the major challenges. Among numerous approaches to overcome this challenge, integration of Knowledge Graphs (KGs) into language models has emerged as a promising solution, as it provides structured, reliable, domain-specific, and up-to-date external information to the language models. In this paper, we introduce ALIGNed-LLM, a simple yet effective approach to improve language models' factuality via a lean strategy to infuse KGs into the latent space of language models, inspired by LLaVA, where visual and textual information is infused. We use embeddings from a pre-trained Knowledge Graph Embedding (KGE) model, such as TransE, and a trainable projection layer to align entity and text embeddings. This alignment enables the language model to distinguish between similar entities, improving factual grounding and reducing hallucination. We tested our approach on three popular question-answering benchmark datasets alongside language models of varying sizes, showing significant improvement. Furthermore, we applied our approach to a real-world financial use case from a large central bank in Europe, which demands high accuracy and precision, demonstrating a substantial improvement of the LLM answers.
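TransE, the KGE model named in the abstract, represents a true triple (head, relation, tail) so that head + relation lies close to tail in embedding space. The toy sketch below shows how this geometry separates a correct fact from a similar but wrong one; the hand-set vectors and entity names are invented for illustration, since real TransE embeddings are learned from a KG.

```python
# Toy TransE scoring sketch: for a true triple (h, r, t), h + r ≈ t,
# so the negative distance -||h + r - t|| is higher for plausible facts.
# Vectors are hand-set for illustration, not trained embeddings.
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Higher score = more plausible triple under TransE."""
    return -float(np.linalg.norm(h + r - t))

# Hypothetical 2-D embeddings (real models use hundreds of dimensions).
berlin     = np.array([1.0, 0.0])
germany    = np.array([1.0, 1.0])
france     = np.array([5.0, 5.0])
capital_of = np.array([0.0, 1.0])  # relation vector

true_score  = transe_score(berlin, capital_of, germany)
false_score = transe_score(berlin, capital_of, france)
print(true_score > false_score)  # True
```

It is this kind of geometric separation between similar entities that, once projected into the LLM's latent space, is meant to help the model ground its answers in the correct entity.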
Problem

Research questions and friction points this paper is trying to address.

Aligning Knowledge Graphs with language models to enhance factual accuracy
Reducing hallucination in LLMs using structured external knowledge
Improving entity distinction in language models via KG embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrate Knowledge Graphs into language models
Align entity and text embeddings via projection
Improve factual grounding and reduce hallucination