SLiNT: Structure-aware Language Model with Injection and Contrastive Training for Knowledge Graph Completion

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak structural awareness of large language models (LLMs) and their poor generalization in sparse and zero-shot settings for knowledge graph (KG) link prediction, this paper proposes SLiNT, a lightweight, modular framework that injects graph-structural signals into a frozen LLM backbone. The method comprises three core components: (1) Structure-Guided Neighborhood Enhancement (SGNE), which retrieves pseudo-neighbors to mitigate structural sparsity; (2) Dynamic Hard Contrastive Learning (DHCL), which strengthens semantic discriminability through hard-example supervision; and (3) Gradient-Decoupled Dual Injection (GDDI), which enables joint optimization of structural and semantic representations. Leveraging LoRA-based adaptation, the framework avoids full LLM fine-tuning. Experiments on WN18RR and FB15k-237 demonstrate state-of-the-art or competitive performance, supporting the effectiveness and generalizability of structure-aware training for scalable KG completion.

📝 Abstract
Link prediction in knowledge graphs requires integrating structural information and semantic context to infer missing entities. While large language models offer strong generative reasoning capabilities, their limited exploitation of structural signals often results in structural sparsity and semantic ambiguity, especially under incomplete or zero-shot settings. To address these challenges, we propose SLiNT (Structure-aware Language model with Injection and coNtrastive Training), a modular framework that injects knowledge-graph-derived structural context into a frozen LLM backbone with lightweight LoRA-based adaptation for robust link prediction. Specifically, Structure-Guided Neighborhood Enhancement (SGNE) retrieves pseudo-neighbors to enrich sparse entities and mitigate missing context; Dynamic Hard Contrastive Learning (DHCL) introduces fine-grained supervision by interpolating hard positives and negatives to resolve entity-level ambiguity; and Gradient-Decoupled Dual Injection (GDDI) performs token-level structure-aware intervention while preserving the core LLM parameters. Experiments on WN18RR and FB15k-237 show that SLiNT achieves superior or competitive performance compared with both embedding-based and generation-based baselines, demonstrating the effectiveness of structure-aware representation learning for scalable knowledge graph completion.
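The abstract describes DHCL as interpolating hard positives and negatives to sharpen entity-level discrimination. A minimal sketch of that idea, assuming a mixup-style interpolation and an InfoNCE-style objective (the paper's exact mining and interpolation scheme may differ, and all names here are illustrative):

```python
import torch
import torch.nn.functional as F

def dhcl_loss(anchor, positive, negatives, alpha=0.8, tau=0.07):
    """Sketch of Dynamic Hard Contrastive Learning (DHCL).

    anchor, positive: [B, D] entity embeddings
    negatives:        [B, N, D], assumed pre-sorted hardest-first
    """
    # Hard positive: pulled toward the hardest negative, so the model
    # must still rank it above every negative despite the smaller margin.
    hardest_neg = negatives[:, 0]
    hard_pos = alpha * positive + (1 - alpha) * hardest_neg
    # Hard negatives: pulled toward the anchor, shrinking the margin
    # from the other side.
    hard_negs = alpha * negatives + (1 - alpha) * anchor.unsqueeze(1)

    a = F.normalize(anchor, dim=-1)
    p = F.normalize(hard_pos, dim=-1)
    n = F.normalize(hard_negs, dim=-1)

    pos_sim = (a * p).sum(-1, keepdim=True) / tau     # [B, 1]
    neg_sim = torch.einsum("bd,bnd->bn", a, n) / tau  # [B, N]
    logits = torch.cat([pos_sim, neg_sim], dim=-1)
    labels = torch.zeros(a.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)
```

The interpolation coefficient `alpha` controls example difficulty: lower values push negatives closer to the anchor and positives closer to the hardest negative, tightening the decision boundary the model must learn.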
Problem

Research questions and friction points this paper is trying to address.

Enhancing structural context integration for link prediction
Mitigating structural sparsity and semantic ambiguity issues
Improving zero-shot and incomplete knowledge graph completion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure injection via lightweight LoRA adaptation
Dynamic contrastive learning with hard examples
Gradient-decoupled token-level structure intervention
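The gradient-decoupled injection can be sketched as a small trainable projector that adds a structure-derived vector to the frozen backbone's token embeddings, with `detach()` blocking gradients from the LM loss into the structural branch. This is an assumed implementation, not the paper's code; module and parameter names are hypothetical:

```python
import torch
import torch.nn as nn

class GradientDecoupledInjection(nn.Module):
    """Sketch of Gradient-Decoupled Dual Injection (GDDI)."""

    def __init__(self, struct_dim: int, hidden_dim: int):
        super().__init__()
        # The only trainable part on this path; the LLM backbone stays frozen.
        self.proj = nn.Linear(struct_dim, hidden_dim)

    def forward(self, token_embeds: torch.Tensor, struct_embed: torch.Tensor):
        # token_embeds: [B, T, H] from the frozen backbone
        # struct_embed: [B, S] from the structural encoder
        # detach() decouples the gradients: the LM objective updates only
        # the projector, while the structural encoder is trained on its
        # own (e.g. contrastive) signal.
        injected = self.proj(struct_embed.detach())
        return token_embeds + injected.unsqueeze(1)  # broadcast over tokens
```

Combined with LoRA adapters on the backbone, this keeps the core LLM parameters untouched while still performing token-level, structure-aware intervention.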
👥 Authors
Mengxue Yang
University of Chinese Academy of Sciences, Beijing, China
Chun Yang
School of Aerospace, Tsinghua University
Jiaqi Zhu
University of Chinese Academy of Sciences, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China
Jiafan Li
University of Chinese Academy of Sciences, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China
Jingqi Zhang
University of Chinese Academy of Sciences, Beijing, China
Yuyang Li
Institute for AI, Peking University
Ying Li
University of Chinese Academy of Sciences, Beijing, China