🤖 AI Summary
This study investigates how topological positional information in knowledge graphs (KGs) enhances relation extraction, particularly addressing performance bottlenecks in multi-relation, few-shot, and zero-shot settings. We propose the Graph-Aware Neural Bellman–Ford Network, which explicitly encodes structural positions of entities within the KG and jointly integrates KG embedding features. Additionally, we design a unified training framework that jointly optimizes supervised learning and zero-shot transfer. To our knowledge, this is the first systematic demonstration that graph-structural priors universally improve relational discrimination—especially mitigating sample imbalance for long-tail relations. Empirical evaluation across multiple benchmark datasets shows an average 12.3% F1-score improvement under few-shot settings and up to 18.7% accuracy gain in zero-shot transfer, significantly enhancing model generalization and cross-relation transferability.
📝 Abstract
We examine the impact of incorporating knowledge graph information on the performance of relation extraction models across a range of datasets. Our hypothesis is that the positions of entities within a knowledge graph provide important insights for relation extraction tasks. We conduct experiments on multiple datasets, each varying in the number of relations, training examples, and underlying knowledge graphs. Our results demonstrate that integrating knowledge graph information significantly enhances performance, especially when dealing with an imbalance in the number of training examples for each relation. We evaluate the contribution of knowledge graph-based features by combining established relation extraction methods with graph-aware Neural Bellman-Ford networks. These features are tested in both supervised and zero-shot settings, demonstrating consistent performance improvements across various datasets.