🤖 AI Summary
Entity alignment (EA) suffers from limited interpretability because conventional embedding methods rely solely on distance-based similarity metrics and neglect the logical justifications behind alignments. To address this, we propose the Align-Subgraph Entity Alignment (ASGEA) framework, together with an interpretable Path-based Graph Neural Network (ASGNN), the first approach to explicitly encode cross-KG alignment subgraphs as carriers of logic rules. ASGNN introduces a node-level multi-modal attention mechanism and multi-modal-enriched anchor nodes, enabling rule-driven, interpretable alignment. Our method achieves state-of-the-art performance on both standard EA and multi-modal EA (MMEA) benchmarks, significantly outperforming existing embedding-based models. Crucially, it maintains high accuracy while providing transparent, step-by-step alignment reasoning paths, thereby unifying strong interpretability with robust generalization across diverse KGs.
📝 Abstract
Entity alignment (EA) aims to identify entities across different knowledge graphs (KGs) that represent the same real-world objects. Recent embedding-based EA methods have achieved state-of-the-art performance, yet they face interpretability challenges because they rely purely on embedding distance and neglect the logic rules behind a pair of aligned entities. In this paper, we propose the Align-Subgraph Entity Alignment (ASGEA) framework to exploit logic rules from Align-Subgraphs. ASGEA uses anchor links as bridges to construct Align-Subgraphs and spreads along the paths across KGs, which distinguishes it from the embedding-based methods. Furthermore, we design an interpretable Path-based Graph Neural Network, ASGNN, to effectively identify and integrate the logic rules across KGs. We also introduce a node-level multi-modal attention mechanism coupled with multi-modal enriched anchors to augment the Align-Subgraph. Our experimental results demonstrate the superior performance of ASGEA over existing embedding-based methods on both EA and Multi-Modal EA (MMEA) tasks.
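The abstract's core idea, using anchor links as cross-KG bridges and spreading along paths between a candidate entity pair, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy KGs, entity names, and the simple BFS-based subgraph extraction are all hypothetical assumptions chosen only to show how anchor links let paths cross between two graphs.

```python
from collections import deque

# Hypothetical toy KGs: each is a set of undirected edges (head, tail).
kg1 = {("Paris", "France"), ("France", "Europe")}
kg2 = {("Paris_fr", "France_fr"), ("France_fr", "Europe_fr")}
# Anchor links: known aligned entity pairs bridging the two KGs.
anchors = {("France", "France_fr")}

def build_merged_graph(kg1, kg2, anchors):
    """Merge both KGs into one adjacency map, treating anchor links as
    cross-KG bridge edges (the bridging role described in the abstract)."""
    adj = {}
    for h, t in list(kg1) + list(kg2) + list(anchors):
        adj.setdefault(h, set()).add(t)
        adj.setdefault(t, set()).add(h)
    return adj

def align_subgraph(adj, source, target, max_hops=3):
    """Simplified Align-Subgraph extraction: keep every node that lies on
    some source-to-target path of length <= max_hops, i.e. the nodes a
    path-based spread across KGs would visit."""
    def bfs(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    d_src, d_tgt = bfs(source), bfs(target)
    return {n for n in adj
            if n in d_src and n in d_tgt
            and d_src[n] + d_tgt[n] <= max_hops}

adj = build_merged_graph(kg1, kg2, anchors)
sub = align_subgraph(adj, "Paris", "Paris_fr", max_hops=4)
# sub == {"Paris", "France", "France_fr", "Paris_fr"}: the cross-KG path
# through the anchor survives, while off-path nodes (Europe) are pruned.
```

In the full framework such a subgraph would then be scored by ASGNN, whose path attention supplies the logic-rule justification; here the extraction step alone shows why the resulting alignment evidence is human-readable.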