🤖 AI Summary
This paper addresses the insufficient integration of structural, textual, and logical rule information in knowledge graph embedding. Methodologically, it proposes a collaborative embedding framework that: (1) employs Graph Convolutional Networks (GCNs) to model topological structure; (2) introduces a confidence scoring mechanism grounded in first-order logic rules to explicitly encode rule-based constraints; and (3) jointly encodes literal information—combining word embeddings and pre-trained language models—to construct a unified structural–textual–logical representation. Extensive link prediction experiments on FB15k-237 and WN18RR demonstrate that the proposed approach significantly outperforms state-of-the-art baselines. The results validate that logic-guided confidence modeling and joint optimization of heterogeneous information sources substantially enhance the quality of entity and relation embeddings.
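The two mechanisms named above — GCN-based propagation over the graph topology and first-order-rule confidence scoring — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`gcn_layer`, `rule_confidence`), the toy triples, and the support-ratio definition of rule confidence are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's implementation): one GCN propagation
# step over a toy knowledge graph, plus a rule confidence computed as the
# support ratio of a first-order rule body_rel(x,y) => head_rel(x,y).
# All names and the toy data are assumptions.
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def rule_confidence(triples, body_rel, head_rel):
    """Fraction of groundings of body_rel for which head_rel also holds."""
    body = {(h, t) for h, r, t in triples if r == body_rel}
    head = {(h, t) for h, r, t in triples if r == head_rel}
    return len(body & head) / len(body) if body else 0.0

# Toy graph: 3 entities, adjacency built from (head, relation, tail) triples.
triples = [(0, "bornIn", 1), (0, "livesIn", 1), (2, "bornIn", 1)]
A = np.zeros((3, 3))
for h, _, t in triples:
    A[h, t] = A[t, h] = 1.0

H = np.random.default_rng(0).normal(size=(3, 4))   # initial entity features
W = np.random.default_rng(1).normal(size=(4, 4))   # layer weights
H_next = gcn_layer(A, H, W)

# bornIn(x,y) => livesIn(x,y): 1 of 2 body groundings has the head.
conf = rule_confidence(triples, "bornIn", "livesIn")
```

In a full model such a confidence score would weight how strongly each rule-derived constraint influences the embedding objective; here it only demonstrates the support-ratio idea.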
📝 Abstract
Recent studies focus on knowledge graph embedding, which maps entities and relations into low-dimensional vector spaces. While existing models mainly consider graph structure, there is a wealth of contextual and literal information that can be exploited for more effective embedding learning. This paper introduces a novel model that incorporates both contextual and literal information into entity and relation embeddings using graph convolutional networks. Specifically, for contextual information, we assess its significance through confidence and relatedness metrics. A novel rule-based method is developed to compute the confidence metric, while the relatedness metric is derived from representations of the literal information. We validate the model's performance through extensive experiments on two established benchmark datasets.
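The abstract states that the relatedness metric is derived from representations of the literal information. One plausible instantiation, shown below purely as a hedged sketch, scores a contextual neighbor by the cosine similarity between literal (textual) embedding vectors; the function name `relatedness`, the rescaling to [0, 1], and the toy vectors are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: scoring contextual relatedness from literal embeddings
# via cosine similarity, rescaled to [0, 1]. Illustrative only.
import numpy as np

def relatedness(v_entity, v_context):
    """Cosine similarity between two literal embeddings, mapped to [0, 1]."""
    cos = v_entity @ v_context / (
        np.linalg.norm(v_entity) * np.linalg.norm(v_context)
    )
    return 0.5 * (cos + 1.0)  # rescale from [-1, 1] to [0, 1]

# Toy literal embeddings (e.g., from word embeddings or a language model).
v_a = np.array([1.0, 0.0, 1.0])
v_b = np.array([1.0, 0.0, 1.0])
v_c = np.array([-1.0, 0.0, -1.0])

r_same = relatedness(v_a, v_b)  # identical literals -> maximal relatedness
r_opp = relatedness(v_a, v_c)   # opposed literals  -> minimal relatedness
```

In a joint model, such scores could weight how much each contextual neighbor contributes during GCN aggregation, alongside the rule-based confidence.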