🤖 AI Summary
To address the challenges of ineffective graph-structure integration and high inference uncertainty in large language models (LLMs) for knowledge graph completion (KGC), this paper proposes a joint LLM-graph encoder modeling framework. Methodologically, it introduces: (1) an improved Graph Transformer (iGT) that jointly captures local and global structural dependencies while preserving language modeling capabilities; (2) a subgraph multi-class classification objective enabling discriminative, full-entity joint prediction; and (3) a KG-specific three-token linguistic prompting mechanism to ensure deterministic generation of structured facts. Evaluated on multiple standard benchmarks, the approach achieves significant improvements over state-of-the-art methods, with enhanced inference determinism, higher training efficiency, and 100% entity coverage.
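The three-token prompting mechanism mentioned in point (3) can be illustrated with a minimal sketch. The actual token names and prompt template used by GLTW are not given in this summary; the `[H]`/`[R]`/`[T]` markers and the `build_kgc_prompt` helper below are illustrative assumptions only.

```python
# Hypothetical illustration: three special tokens mark the head entity,
# the relation, and the tail slot to be predicted. Framing the query this
# way constrains the model to fill a single structured slot instead of
# producing free-form text, which supports deterministic output.
HEAD, REL, TAIL = "[H]", "[R]", "[T]"  # assumed token names

def build_kgc_prompt(head: str, relation: str) -> str:
    """Frame an incomplete triple (head, relation, ?) as a prompt whose
    answer position is a single dedicated token."""
    return f"{HEAD} {head} {REL} {relation} {TAIL}"

print(build_kgc_prompt("Paris", "capital_of"))
# → [H] Paris [R] capital_of [T]
```

The design choice here is that the answer occupies exactly one slot, so prediction reduces to a classification step over candidate entities rather than open-ended generation.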
📄 Abstract
Knowledge Graph Completion (KGC), which aims to infer missing or incomplete facts, is a crucial task for KGs. However, integrating the vital structural information of KGs into Large Language Models (LLMs) and outputting predictions deterministically remains challenging. To address this, we propose a new method called GLTW, which encodes the structural information of KGs and merges it with LLMs to enhance KGC performance. Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information and inherits the characteristics of the language model, bypassing training from scratch. We also develop a subgraph-based multi-classification training objective, using all entities within the KG as classification objects, to boost learning efficiency. Importantly, we combine iGT with an LLM that takes KG language prompts as input. Our extensive experiments on various KG datasets show that GLTW achieves significant performance gains compared to SOTA baselines.
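The multi-classification objective over all entities can be sketched as a standard softmax cross-entropy loss. This is a minimal sketch under stated assumptions, not GLTW's implementation: the scores and entity count are toy values, and the real model would produce per-entity scores from the combined iGT and LLM representations.

```python
import math

def softmax_cross_entropy(scores, target_idx):
    """Multi-class loss over all entities in the KG: scores[i] is a
    compatibility score for candidate entity i, and target_idx is the
    index of the gold entity for the incomplete triple."""
    m = max(scores)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return -math.log(exps[target_idx] / z)

# Toy example: a KG with 4 entities, gold entity at index 2.
scores = [0.1, -1.3, 2.4, 0.5]  # hypothetical model scores
loss = softmax_cross_entropy(scores, 2)
print(f"{loss:.4f}")
```

Treating every entity in the KG as a class gives one discriminative gradient signal per training example across the full candidate set, which is the efficiency gain the abstract refers to.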