Semantic-Condition Tuning: Fusing Graph Context with Large Language Models for Knowledge Graph Completion

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the semantic disintegration caused by the shallow concatenation of knowledge embeddings with textual inputs, as well as the excessive implicit inference burden this places on large language models (LLMs) in knowledge graph (KG) completion, this paper proposes the Semantic-Condition Tuning (SCT) framework. SCT introduces a semantic graph module that extracts context-aware features from the local graph structure, and a condition-adaptive fusion module that modulates the LLM's text embeddings at the feature level via parameterized projectors, enabling deep integration of graph semantics and linguistic representations. This design strengthens relational semantic understanding and inference robustness. Empirical evaluation on multiple KG completion benchmarks shows that SCT consistently outperforms mainstream baselines, including prefix-tuning, validating its effectiveness on knowledge-intensive tasks.

📝 Abstract
Fusing Knowledge Graphs with Large Language Models is crucial for knowledge-intensive tasks like knowledge graph completion. The prevailing paradigm, prefix-tuning, simply concatenates knowledge embeddings with text inputs. However, this shallow fusion overlooks the rich relational semantics within KGs and imposes a significant implicit reasoning burden on the LLM to correlate the prefix with the text. To address these issues, we propose Semantic-Condition Tuning (SCT), a new knowledge injection paradigm comprising two key modules. First, a Semantic Graph Module employs a Graph Neural Network to extract a context-aware semantic condition from the local graph neighborhood, guided by knowledge-enhanced relations. This condition is then passed to a Condition-Adaptive Fusion Module, which adaptively modulates the textual embedding via two parameterized projectors, enabling a deep, feature-wise, knowledge-aware interaction. The resulting pre-fused embedding is fed into the LLM for fine-tuning. Extensive experiments on knowledge graph benchmarks demonstrate that SCT significantly outperforms prefix-tuning and other strong baselines. Our analysis confirms that by modulating the input representation with semantic graph context before LLM inference, SCT provides a more direct and potent signal, enabling more accurate and robust knowledge reasoning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing knowledge graph completion by integrating graph context with language models
Overcoming shallow fusion limitations in existing knowledge injection methods
Enabling deep knowledge-aware interaction through semantic-conditioned modulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Network extracts context-aware semantic conditions
Condition-Adaptive Fusion modulates textual embeddings adaptively
Semantic-condition tuning enables deep knowledge-aware LLM interaction
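The abstract describes the Condition-Adaptive Fusion Module as modulating text embeddings feature-wise through two parameterized projectors driven by a graph-derived condition. The paper does not publish its exact equations here, but the idea can be sketched as a FiLM-style scale-and-shift modulation; all names, dimensions, and the near-identity initialization below are assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch of a condition-adaptive fusion step (assumed design):
# a GNN-pooled "semantic condition" vector is mapped by two projectors
# to a per-feature scale and shift that modulate LLM token embeddings.
import numpy as np

rng = np.random.default_rng(0)
d_text, d_cond, seq_len = 16, 8, 4   # illustrative sizes

# Two parameterized projectors (small init keeps scale near identity).
W_scale = rng.normal(0.0, 0.02, (d_cond, d_text))
W_shift = rng.normal(0.0, 0.02, (d_cond, d_text))

def condition_adaptive_fusion(text_emb, condition):
    """Feature-wise modulation of token embeddings by a graph condition."""
    scale = 1.0 + condition @ W_scale   # multiplicative term, near 1.0
    shift = condition @ W_shift         # additive term
    # scale/shift broadcast over the sequence axis of text_emb
    return text_emb * scale + shift

text_emb = rng.normal(size=(seq_len, d_text))  # stand-in token embeddings
condition = rng.normal(size=(d_cond,))         # stand-in pooled graph context

fused = condition_adaptive_fusion(text_emb, condition)
print(fused.shape)  # (4, 16)
```

With a zero condition the modulation reduces to the identity, so the untuned model's text embeddings pass through unchanged; this is one common way such fusion modules are initialized, though the paper may differ.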
Ruitong Liu
Dalian University of Technology
Yan Wen
Undergraduate, Fudan University
Te Sun
Dalian University of Technology
Yunjia Wu
Dalian University of Technology
Pingyang Huang
Dalian University of Technology
Zihang Yu
Dalian University of Technology
Siyuan Li
Dalian University of Technology