🤖 AI Summary
Entity alignment (EA) aims to identify equivalent entities across knowledge graphs, yet symbolic models struggle with substructure heterogeneity and sparsity, while neural models lack interpretability and uncertainty quantification. This paper proposes the first neural-symbolic probabilistic framework for EA based on variational inference. The framework integrates graph neural networks with logical rules by embedding rule constraints in a Markov random field, and a variational EM algorithm is designed for efficient posterior inference. A path-ranking-based interpretability generator is further introduced to produce human-understandable, rule-level explanations. Evaluated on standard benchmarks, the method significantly outperforms state-of-the-art approaches, delivering higher accuracy, stronger robustness to noise, and verifiable, logically grounded explanations.
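The path-ranking-based explanation idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: all names (`explain`, `r_name_match`, the toy paths and weights) are hypothetical, and it assumes explanations are produced by ranking the rules that support an inferred pair by their learned weights.

```python
# Hypothetical sketch of a path-ranking-based explainer (illustrative names only):
# given an inferred alignment, rank the relation paths / rules supporting the
# pair by learned rule weight and return the top ones as explanations.

def explain(pair, candidate_paths, rule_weights, k=2):
    """candidate_paths: list of (rule_name, path_description) supporting `pair`."""
    scored = [(rule_weights.get(rule, 0.0), rule, desc)
              for rule, desc in candidate_paths]
    scored.sort(reverse=True)                  # highest-weight rules first
    return [(rule, desc) for _, rule, desc in scored[:k]]

# Toy supporting evidence for one candidate pair (entirely made up).
paths = [
    ("r_neighbor_match", "e2 -born_in-> c1  ~  e2' -born_in-> c1'"),
    ("r_name_match",     "label(e2) matches label(e2')"),
    ("r_weak_hint",      "e2 -type-> Person  ~  e2' -type-> Person"),
]
weights = {"r_name_match": 2.1, "r_neighbor_match": 1.4, "r_weak_hint": 0.3}

print(explain("(e2, e2')", paths, weights))
```

The returned rules double as a human-readable justification: each one names the shared relation path that links the pair to evidence both KGs agree on.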
📝 Abstract
Entity alignment (EA) aims to merge two knowledge graphs (KGs) by identifying equivalent entity pairs. Existing methods can be categorized into symbolic and neural models. Symbolic models, while precise, struggle with substructure heterogeneity and sparsity, whereas neural models, although effective, generally lack interpretability and cannot handle uncertainty. We propose NeuSymEA, a probabilistic neuro-symbolic framework that combines the strengths of both paradigms. NeuSymEA models the joint probability of the truth scores of all possible pairs in a Markov random field regulated by a set of rules, and optimizes it with the variational EM algorithm. In the E-step, a neural model parameterizes the truth-score distributions and infers missing alignments. In the M-step, the rule weights are updated based on the observed and inferred alignments. To facilitate interpretability, we further design a path-ranking-based explainer on top of this framework that generates supporting rules for the inferred alignments. Experiments on benchmarks demonstrate that NeuSymEA not only significantly outperforms baselines in effectiveness and robustness, but also yields interpretable results.
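The alternating E-step/M-step structure described in the abstract can be sketched on a toy problem. This is a rough illustration under stated assumptions, not the paper's algorithm: the pairs, rules, `fires` grounding, the sigmoid scoring in `e_step` (standing in for the neural parameterization), and the weight update in `m_step` are all invented for the example.

```python
# Hypothetical sketch of a variational-EM-style loop for entity alignment.
# E-step: infer truth scores q(pair) for unobserved pairs from weighted rules
# (a stand-in for the neural model that parameterizes these distributions).
# M-step: nudge each rule's weight toward the average truth score of the
# pairs it supports (a crude surrogate for the paper's weight update).
import math

pairs = ["(e1, e1')", "(e2, e2')", "(e3, e3')"]
observed = {"(e1, e1')": 1.0}                 # seed alignment, truth score fixed
rules = {"r_name_match": 1.0, "r_neighbor_match": 1.0}   # initial rule weights

# Which rules fire for which pair (toy stand-in for rule grounding over two KGs).
fires = {
    "(e1, e1')": ["r_name_match", "r_neighbor_match"],
    "(e2, e2')": ["r_neighbor_match"],
    "(e3, e3')": [],
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def e_step():
    """Infer truth scores for unobserved pairs from current rule weights."""
    q = dict(observed)
    for p in pairs:
        if p not in q:
            support = sum(rules[r] for r in fires[p])
            q[p] = sigmoid(support - 1.0)     # -1.0: illustrative prior offset
    return q

def m_step(q):
    """Update rule weights from observed plus inferred alignments."""
    for r in rules:
        supported = [q[p] for p in pairs if r in fires[p]]
        if supported:
            target = sum(supported) / len(supported)
            rules[r] += 0.5 * (target - sigmoid(rules[r]))

for _ in range(10):      # alternate E- and M-steps until roughly stable
    q = e_step()
    m_step(q)

print(sorted(q.items()))
```

After a few iterations, pairs supported by high-weight rules receive higher truth scores than unsupported ones, which is the qualitative behavior the E/M alternation is meant to produce.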