LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph

📅 2025-04-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from knowledge staleness and unreliable reasoning; existing knowledge graph (KG)-enhanced approaches often neglect KG structural information, rely on closed-source or parameter-heavy models, and incur high computational costs. Method: We propose LightPROF, a lightweight KG-augmented reasoning framework that pioneers aligning KG structural information into the LLM's embedding space. It introduces a minimal Knowledge Adapter, requiring only lightweight fine-tuning, and is compatible with arbitrary open-source small language models (e.g., Phi-3, Qwen1.5-0.5B). LightPROF integrates KG-aware retrieval, structure-sensitive embedding mapping, and prompt optimization. Contribution/Results: It reduces input token count by 68% and cuts inference latency by a factor of 3.2. On two KG question answering (KGQA) benchmarks, a 0.5B model augmented with LightPROF matches the performance of a 7B closed-source model, demonstrating the efficacy and practicality of efficient structural-knowledge injection into compact LLMs.

📝 Abstract
Large Language Models (LLMs) have impressive capabilities in text understanding and zero-shot reasoning. However, delays in knowledge updates may cause them to reason incorrectly or produce harmful results. Knowledge Graphs (KGs) provide rich and reliable contextual information for the reasoning process of LLMs by structurally organizing and connecting a wide range of entities and relations. Existing KG-based LLM reasoning methods inject KG knowledge into prompts only in textual form, ignoring its structural information. Moreover, they mostly rely on closed-source models or open-source models with large parameter counts, which leads to high resource consumption. To address this, we propose a novel Lightweight and efficient Prompt learning-ReasOning Framework for KGQA (LightPROF), which leverages the full potential of LLMs to tackle complex reasoning tasks in a parameter-efficient manner. Specifically, LightPROF follows a "Retrieve-Embed-Reason" process: it first accurately and stably retrieves the corresponding reasoning graph from the KG through a retrieval module. Next, through a Transformer-based Knowledge Adapter, it finely extracts and integrates factual and structural information from the KG, then maps this information into the LLM's token embedding space, creating an LLM-friendly prompt for the final reasoning step. Additionally, LightPROF requires training only the Knowledge Adapter and is compatible with any open-source LLM. Extensive experiments on two public KGQA benchmarks demonstrate that LightPROF achieves superior performance with small-scale LLMs. Furthermore, LightPROF shows significant advantages in input token count and reasoning time.
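The abstract's central idea, a Transformer-based Knowledge Adapter that fuses retrieved KG triples and maps them into the LLM's token embedding space as a soft prompt, can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the dimensions, the toy `embed_triples` encoder, and the single self-attention layer are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_KG, D_LLM = 64, 128  # assumed triple-embedding and LLM hidden sizes

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # single-head self-attention over the sequence of triple embeddings,
    # letting each triple attend to the others (structure-aware fusion)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

def embed_triples(triples):
    # toy stand-in: one random vector per retrieved (head, relation, tail)
    return rng.standard_normal((len(triples), D_KG))

# adapter parameters: in LightPROF's setup, only this small module is trained
Wq = rng.standard_normal((D_KG, D_KG)) / np.sqrt(D_KG)
Wk = rng.standard_normal((D_KG, D_KG)) / np.sqrt(D_KG)
Wv = rng.standard_normal((D_KG, D_KG)) / np.sqrt(D_KG)
W_proj = rng.standard_normal((D_KG, D_LLM)) / np.sqrt(D_KG)

def knowledge_adapter(triples):
    """Fuse triple embeddings, then project into the LLM embedding space."""
    X = embed_triples(triples)          # (n_triples, D_KG)
    H = self_attention(X, Wq, Wk, Wv)   # fused representations
    return H @ W_proj                   # (n_triples, D_LLM) soft-prompt vectors

triples = [("Paris", "capital_of", "France"),
           ("France", "located_in", "Europe")]
soft_prompt = knowledge_adapter(triples)
# these vectors would be prepended to the question's token embeddings
print(soft_prompt.shape)  # (2, 128)
```

A frozen LLM would then consume the concatenation of these soft-prompt vectors and the question's own token embeddings, which is what lets a small model reason over KG structure without textualizing every triple.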
Problem

Research questions and friction points this paper is trying to address.

Addresses incorrect reasoning in LLMs due to outdated knowledge
Integrates KG structural info into LLMs efficiently
Reduces resource use with lightweight framework for KGQA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight reasoning framework for KGQA
Retrieve-Embed-Reason process with Knowledge Adapter
Parameter-efficient training with open-source LLMs
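The Retrieve-Embed-Reason process named above can be laid out as a three-stage pipeline. The skeleton below is a hypothetical orchestration under assumed interfaces; every function body is a placeholder stub, not the paper's code.

```python
# Minimal sketch of the "Retrieve-Embed-Reason" flow (all stubs are assumptions).

def retrieve(question, kg):
    """Stage 1: select a small reasoning subgraph relevant to the question."""
    return [t for t in kg if t[0] in question or t[2] in question]

def embed(triples):
    """Stage 2: the Knowledge Adapter would map triples to soft-prompt
    vectors here; this stub serializes them textually as a stand-in."""
    return " ; ".join(f"{h} -{r}-> {t}" for h, r, t in triples)

def reason(question, knowledge_prompt):
    """Stage 3: a frozen open-source LLM consumes the prompt; stubbed here."""
    return f"[LLM answer to '{question}' given: {knowledge_prompt}]"

kg = [("Paris", "capital_of", "France"),
      ("Berlin", "capital_of", "Germany")]
question = "What country is Paris the capital of?"
answer = reason(question, embed(retrieve(question, kg)))
print(answer)
```

Only stage 2 carries trainable parameters; stages 1 and 3 are frozen, which is the source of the framework's parameter efficiency.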
Authors
Tu Ao
Beijing University of Posts and Telecommunications, China
Yanhua Yu
Beijing University of Posts and Telecommunications, China
Yuling Wang
Hangzhou Dianzi University, China
Yang Deng
Singapore Management University, Singapore
Zirui Guo
Beijing University of Posts and Telecommunications
Liang Pang
Associate Professor, Institute of Computing Technology, Chinese Academy of Sciences
Pinghui Wang
Xi'an Jiaotong University
Tat-Seng Chua
National University of Singapore, Singapore
Xiao Zhang
Beijing University of Posts and Telecommunications, China
Zhen Cai
Beijing University of Posts and Telecommunications, China