🤖 AI Summary
This work addresses the challenge of unreliable triples in knowledge graphs and the limitations of existing classification methods in modeling semantic interactions and learning expressive representations. To this end, we propose a decoupled context encoding approach based on a disentangled attention mechanism that effectively captures interactions among the components of a triple. Furthermore, we introduce a semantic-aware hierarchical contrastive learning objective that integrates both local and global semantic information to enhance representation quality. By incorporating natural language description embeddings, our model achieves significant improvements in classification accuracy, outperforming state-of-the-art methods by 5.9% on FB15k-237 and 3.4% on YAGO3-10.
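The decoupled encoding idea described above can be illustrated with a minimal sketch: each triple component is first encoded separately, then the component representations interact through an attention step and are pooled into a single triple vector. This is a hypothetical illustration, not the paper's actual architecture; all function and parameter names (`fuse_triple`, `W_q`, `W_k`, `W_v`) are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_triple(h, r, t, W_q, W_k, W_v):
    """Fuse three separately encoded triple components via self-attention.

    h, r, t: (d,) decoupled contextual representations of head, relation, tail.
    W_q, W_k, W_v: (d, d) projection matrices (hypothetical parameters).
    """
    X = np.stack([h, r, t])                       # (3, d)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (3, 3) component-interaction weights
    fused = A @ V                                 # each component attends to the others
    return fused.mean(axis=0)                     # pooled triple representation, (d,)
```

The attention matrix `A` is what lets the head, relation, and tail representations interact explicitly rather than being concatenated blindly.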
📝 Abstract
Knowledge Graphs (KGs) often suffer from unreliable knowledge, which restricts their utility. Triple Classification (TC) aims to determine the validity of triples in KGs. Recently, text-based methods that learn entity and relation representations from natural language descriptions have significantly improved the generalization of TC models and set new performance benchmarks. However, two critical challenges remain. First, existing methods often ignore effective semantic interaction among different KG components. Second, most approaches adopt a single binary classification training objective, leading to insufficient semantic representation learning. To address these challenges, we propose **SASA**, a novel framework that enhances TC models via a separated attention mechanism and semantic-aware contrastive learning (CL). Specifically, we first propose a separated attention mechanism that encodes triples into decoupled contextual representations and then fuses them in a more effective, interactive manner. We then introduce semantic-aware hierarchical CL as an auxiliary training objective, operating at both the local and global levels, to guide the model toward stronger discriminative capability and more complete semantic learning. Experimental results on two benchmark datasets demonstrate that SASA significantly outperforms state-of-the-art methods, advancing the state of the art in accuracy by +5.9% on FB15k-237 and +3.4% on YAGO3-10.
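The hierarchical contrastive objective mentioned above can be sketched as a weighted sum of two InfoNCE-style terms, one over local (fine-grained) positives/negatives and one over global (batch-level) ones. This is a minimal illustration under assumed definitions, not the paper's exact loss; the function names and the weight `lam` are hypothetical.

```python
import numpy as np

def info_nce(anchor, pos, negs, tau=0.1):
    """InfoNCE loss for one anchor: -log softmax of the positive similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    s_pos = cos(anchor, pos) / tau
    logits = np.concatenate([[s_pos], [cos(anchor, n) / tau for n in negs]])
    m = logits.max()  # shift for numerical stability
    return -(s_pos - m - np.log(np.exp(logits - m).sum()))

def hierarchical_cl_loss(anchor, local_pos, local_negs,
                         global_pos, global_negs, lam=0.5):
    # combine local-level and global-level contrastive terms (lam is assumed)
    return (info_nce(anchor, local_pos, local_negs)
            + lam * info_nce(anchor, global_pos, global_negs))
```

Pulling the anchor toward both local and global positives while pushing away the corresponding negatives is what gives the representation its discriminative structure at two granularities.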