🤖 AI Summary
To address the weak symbolic reasoning capability of large language models (LLMs), which stems from insufficient structured knowledge support, this paper proposes KG-Inject, a lightweight integration method that injects knowledge graph embeddings (KGEs) as learnable tokens directly into the LLM's input layer. KG-Inject achieves, for the first time, deep fusion between KGEs and LLM input encoding without modifying model parameters or requiring fine-tuning, thereby preserving knowledge-structure fidelity while remaining computationally efficient. Its model-agnostic design enables plug-and-play compatibility with arbitrary open- or closed-source LLMs. Extensive experiments on both synthetic and real-world datasets demonstrate that KG-Inject significantly improves logical reasoning accuracy, outperforming state-of-the-art knowledge-augmented methods in both overall precision and inference efficiency.
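The core mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration (all names, dimensions, and the linear projection are our assumptions, not taken from the paper): pretrained KGE vectors are mapped into the LLM's token-embedding space by a small projection matrix `W`, which would be the only trainable component since the LLM itself stays frozen, and the projected vectors are prepended to the prompt's token embeddings as extra "soft" tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

KGE_DIM, LLM_DIM = 200, 768  # e.g. a TransE dim vs. an LLM hidden size (illustrative)
W = rng.normal(size=(KGE_DIM, LLM_DIM)) * 0.02  # hypothetical learnable projection

def inject(kge_vectors, token_embeddings):
    """Prepend projected KG embeddings to the LLM input sequence.

    kge_vectors:      (n_entities, KGE_DIM) pretrained KG embeddings
    token_embeddings: (seq_len, LLM_DIM) embeddings of the text prompt
    returns:          (n_entities + seq_len, LLM_DIM) fused input sequence
    """
    kg_tokens = kge_vectors @ W          # map KGE space -> LLM embedding space
    return np.concatenate([kg_tokens, token_embeddings], axis=0)

entities = rng.normal(size=(3, KGE_DIM))   # 3 query-relevant entity embeddings
prompt = rng.normal(size=(10, LLM_DIM))    # embeddings of a 10-token prompt
fused = inject(entities, prompt)
print(fused.shape)  # (13, 768)
```

Because only `W` carries gradients in this sketch, the LLM's parameters are untouched, which matches the paper's claim of no fine-tuning; the exact fusion architecture in KG-Inject may differ.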
📄 Abstract
Integrating structured knowledge from Knowledge Graphs (KGs) into Large Language Models (LLMs) remains a key challenge for symbolic reasoning. Existing methods rely mainly on prompt engineering or fine-tuning, which either loses structural fidelity or incurs high computational cost. Building on recent encoding techniques that integrate graph embeddings into the LLM input as tokens, we extend this paradigm to the KG domain by leveraging Knowledge Graph Embedding (KGE) models, thus enabling graph-aware reasoning. Our approach is model-agnostic, resource-efficient, and compatible with any LLM. Extensive experiments on synthetic and real-world datasets show that our method improves reasoning performance over established baselines and achieves the best accuracy-efficiency trade-off against state-of-the-art LLMs.