🤖 AI Summary
Neural predicates in neuro-symbolic systems lack declarativeness, limiting their interpretability and their capacity for logically complete reasoning.
Method: We propose the first fully declarative, general-purpose framework for neural predicates, modeling them as differentiable relational predicates under logic programming semantics. Our approach supports arbitrary query answering while training on only a single query type, integrating differentiable backward-chaining inference, parameterized relational embeddings, and logic-semantics-driven interface reconstruction to overcome the limitations of conventional functional modeling.
Results: Experiments demonstrate that the framework retains the original learning performance on multi-task logical reasoning benchmarks, achieves zero-shot generalization to unseen query types for the first time, attains 100% inference coverage, and simultaneously satisfies end-to-end learnability and first-order logical completeness.
📝 Abstract
Neuro-symbolic systems (NeSy), which claim to combine the best of the learning and reasoning capabilities of artificial intelligence, are missing a core property of reasoning systems: declarativeness. The lack of declarativeness is caused by the functional nature of neural predicates, inherited from neural networks. We propose and implement a general framework for fully declarative neural predicates, which hence extends to fully declarative NeSy frameworks. We first show that the declarative extension preserves the learning and reasoning capabilities of the original system and can answer arbitrary queries while being trained on only a single query type.
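To make the contrast between functional and relational neural predicates concrete, here is a minimal sketch of the relational view. All names, embeddings, and the fuzzy product/max semantics below are illustrative assumptions, not the paper's actual implementation: a predicate is scored over entity embeddings, and backward chaining over a rule enumerates bindings for shared variables, so the same parameters can answer any query direction.

```python
# Hedged sketch: a neural predicate modeled as a differentiable *relation*
# rather than a one-directional function. Illustrative only; the paper's
# actual semantics and parameterization may differ.
import math

# Entity embeddings (learned parameters in a real system; fixed here).
EMB = {"alice": [1.0, 0.0], "bob": [0.0, 1.0], "carol": [1.0, 1.0]}

def score(pred_weights, x, y):
    """Soft truth value in (0, 1) for pred(x, y), from embeddings."""
    z = sum(w * a * b for w, a, b in zip(pred_weights, EMB[x], EMB[y]))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

# One relational predicate with its own parameters (assumed values).
PARENT_W = [2.0, 2.0]

# Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def grandparent(x, z):
    # Backward chaining: enumerate bindings for the shared variable Y;
    # conjunction as product, disjunction as max (a common fuzzy-logic
    # choice, assumed here for illustration).
    return max(score(PARENT_W, x, y) * score(PARENT_W, y, z) for y in EMB)

# Declarativeness in action: the *same* parameters answer a query type
# never seen in training, e.g. parent(x, Y)? instead of parent(x, y)?
def query_second_arg(x):
    """Answer parent(x, Y)? by ranking all candidate bindings for Y."""
    return max(EMB, key=lambda y: score(PARENT_W, x, y))
```

The point of the sketch is the last function: because the predicate is a scored relation over all argument tuples rather than a fixed input-to-output mapping, any argument position can be queried without retraining, which is the declarative property the abstract describes.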