Enhancing Large Language Models (LLMs) for Telecom using Dynamic Knowledge Graphs and Explainable Retrieval-Augmented Generation

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of general-purpose large language models (LLMs) to hallucination in the telecommunications domain, where specialized terminology, dynamically evolving standards, and inherent complexity often lead to unreliable outputs. To mitigate this issue, the authors propose KG-RAG, a novel framework that uniquely integrates dynamic knowledge graphs with interpretable retrieval-augmented generation (RAG). By injecting structured domain knowledge and enabling real-time factual retrieval, KG-RAG facilitates high-fidelity, traceable reasoning. Experimental results on benchmark tasks demonstrate that the approach significantly enhances both accuracy and regulatory compliance, outperforming standard RAG and standalone LLMs by average accuracy gains of 14.3% and 21.6%, respectively, while effectively suppressing hallucinations.

📝 Abstract
Large language models (LLMs) have shown strong potential across a variety of tasks, but their application in the telecom field remains challenging due to domain complexity, evolving standards, and specialized terminology. Therefore, general-domain LLMs may struggle to provide accurate and reliable outputs in this context, leading to increased hallucinations and reduced utility in telecom operations. To address these limitations, this work introduces KG-RAG, a novel framework that integrates knowledge graphs (KGs) with retrieval-augmented generation (RAG) to enhance LLMs for telecom-specific tasks. In particular, the KG provides a structured representation of domain knowledge derived from telecom standards and technical documents, while RAG enables dynamic retrieval of relevant facts to ground the model's outputs. Such a combination improves factual accuracy, reduces hallucination, and ensures compliance with telecom specifications. Experimental results across benchmark datasets demonstrate that KG-RAG outperforms both LLM-only and standard RAG baselines, e.g., KG-RAG achieves an average accuracy improvement of 14.3% over RAG and 21.6% over LLM-only models. These results highlight KG-RAG's effectiveness in producing accurate, reliable, and explainable outputs in complex telecom scenarios.
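The grounding step the abstract describes (retrieve structured KG facts, then condition the LLM on them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the triple store, the token-overlap scoring heuristic, and the prompt template are all assumptions made for the example.

```python
# Hedged sketch of a KG-RAG retrieval step: look up facts in a toy telecom
# knowledge graph and prepend them to the prompt so the answer is traceable.
# All triples and templates below are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

    def as_fact(self) -> str:
        return f"{self.subject} {self.relation} {self.obj}."


# Toy knowledge graph; a real system would extract triples from 3GPP
# standards and technical documents, as the paper describes.
KG = [
    Triple("5G NR", "uses waveform", "CP-OFDM"),
    Triple("5G NR", "is defined in", "3GPP TS 38.211"),
    Triple("LTE uplink", "uses waveform", "SC-FDMA"),
]


def retrieve_facts(query: str, kg=KG, top_k: int = 2):
    """Score triples by token overlap with the query; return the best matches."""
    q_tokens = set(query.lower().split())
    scored = []
    for t in kg:
        text = f"{t.subject} {t.relation} {t.obj}".lower()
        overlap = len(q_tokens & set(text.split()))
        if overlap:
            scored.append((overlap, t))
    scored.sort(key=lambda pair: -pair[0])
    return [t for _, t in scored[:top_k]]


def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved KG facts so the generated answer cites known facts."""
    facts = retrieve_facts(query)
    context = "\n".join(f"- {t.as_fact()}" for t in facts)
    return (
        f"Known facts:\n{context}\n\n"
        f"Question: {query}\n"
        f"Answer using only the facts above."
    )


print(build_grounded_prompt("Which waveform does 5G NR use?"))
```

A production system would replace the overlap heuristic with embedding-based retrieval over both the KG and the document store, but the structure (retrieve, ground, generate) is the same, and the listed facts are what make the output auditable.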
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Telecom
Hallucination
Domain Knowledge
Factual Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Graph
Retrieval-Augmented Generation
Explainable AI
Domain-Specific LLM
Telecom Knowledge Integration
Dun Yuan
School of Computer Science, McGill University, Montreal, QC H3A 0E9, Canada
Hao Zhou
Samsung Research America / McGill University
Machine Learning, 6G Networks, Large Language Models
Xue Liu
School of Computer Science, McGill University, Montreal, QC H3A 0E9, Canada
Hao Chen
Samsung Research America
AI for Wireless Communication, Sensing, Smart Home, LLM
Yan Xin
Standards and Mobility Innovation Lab, Samsung Research America, Plano, Texas, TX 75023, USA
Jianzhong (Charlie) Zhang
Samsung
5G, Cellular Communications, MIMO, LTE, WiMAX