🤖 AI Summary
This work addresses the challenge of effectively integrating first-order logic constraints (FOLCs) into neural networks. We propose LogicMP, a plug-and-play neural layer that, for the first time, embeds Markov logic network (MLN) structure and logical symmetry into a differentiable neural architecture. Leveraging structure-aware mean-field variational inference, LogicMP transforms sequential symbolic reasoning into parallel tensor operations. The method strikes a favorable balance among expressivity, modularity, and training efficiency. Empirically, it outperforms state-of-the-art baselines on graph, image, and text tasks, speeding up inference several-fold while simultaneously improving accuracy. LogicMP thus offers an efficient and general paradigm for encoding logical constraints in neuro-symbolic integration.
📝 Abstract
Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem, since it requires modeling the intricate correlations needed to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over an MLN. It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations effectively mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. Empirical results on three kinds of tasks over graphs, images, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
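To make the "sequential calculation to parallel tensor operations" idea concrete, here is a minimal, illustrative sketch of mean-field inference for a toy MLN restricted to weighted pairwise implication clauses `x_i -> x_j` over binary ground atoms. This is our own simplification for illustration, not the paper's implementation: the function names, the pairwise restriction, and the update form are assumptions. The point is that the expected-clause-satisfaction messages for all ground clauses are computed at once with vectorized indexing, rather than by looping over ground clauses one at a time.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_step(Q, edges, w, unary):
    """One parallel mean-field update for a toy MLN of implication clauses.

    Q:     (n,) current marginals q(x_i = 1) for n binary ground atoms
    edges: (m, 2) int array; row (i, j) is a ground clause x_i -> x_j
    w:     scalar clause weight
    unary: (n, 2) logits from e.g. a neural network (columns: x=0, x=1)
    """
    src, dst = edges[:, 0], edges[:, 1]
    msg = np.zeros_like(unary)
    zeros = np.zeros_like(Q[src])
    # Clause i -> j is violated only when x_i = 1 and x_j = 0.
    # Message to j: E_q[log potential] is w*(1 - Q[i]) if x_j = 0, w if x_j = 1;
    # shifting by -w gives logits [-w*Q[i], 0]. All clauses at once via add.at.
    np.add.at(msg, dst, w * np.stack([-Q[src], zeros], axis=1))
    # Message to i: w if x_i = 0, w*Q[j] if x_i = 1; shifted: [0, -w*(1 - Q[j])].
    np.add.at(msg, src, w * np.stack([zeros, -(1.0 - Q[dst])], axis=1))
    return softmax(unary + msg, axis=1)[:, 1]

# Tiny usage example: atom 0 has strong positive evidence, clause 0 -> 1.
unary = np.array([[0.0, 4.0], [0.0, 0.0]])
edges = np.array([[0, 1]])
Q = np.full(2, 0.5)
for _ in range(5):
    Q = mean_field_step(Q, edges, w=3.0, unary=unary)
# The implication drags atom 1's marginal above 0.5.
```

In a real LogicMP layer the aggregation runs over arbitrary first-order clauses via einsum-style tensor contractions, but the parallel-message structure is the same as in this sketch.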