🤖 AI Summary
Neural-symbolic AI has long lacked a unified formal definition, hindering theoretical advancement and systematic evaluation. This paper introduces the first general formal framework encompassing mainstream neural-symbolic systems, modeling neural-symbolic inference as the integral of the product of a logical function and a probabilistic belief function—thereby achieving mathematical unification of symbolic logic representation and neural learning mechanisms. Methodologically, it integrates first-order logic, probabilistic inference, and differentiable computation via formal modeling, coupling deterministic symbolic reasoning with uncertain neural learning through an integral operator. The framework rigorously abstracts core architectural components—including symbolic interfaces, neural executors, and belief update mechanisms—filling a foundational theoretical gap in the field. It provides a principled, formal basis for neural-symbolic model design, consistency verification, and cross-system comparative analysis.
📝 Abstract
Neurosymbolic AI focuses on integrating learning and reasoning, in particular, on unifying logical and neural representations. Despite the existence of an alphabet soup of neurosymbolic AI systems, the field lacks a generally accepted formal definition of what neurosymbolic models and inference really are. We introduce a formal definition for neurosymbolic AI that abstracts its key ingredients. More specifically, we define neurosymbolic inference as the computation of an integral over the product of a logical function and a belief function. We show that key representative neurosymbolic AI systems are instances of this definition.
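The integral-over-a-product definition can be made concrete in the discrete case, where the belief function is a probability distribution over finitely many worlds and the integral reduces to a sum — i.e., weighted model counting. The sketch below is illustrative only: the atoms, probabilities, and function names are assumptions for exposition, not drawn from the paper.

```python
import itertools

def neurosymbolic_inference(atoms, logical_fn, belief_fn):
    """Discrete instance of the definition: sum over all truth
    assignments (worlds) of logical_fn(world) * belief_fn(world)."""
    total = 0.0
    for values in itertools.product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        total += logical_fn(world) * belief_fn(world)
    return total

# Hypothetical example: query "burglary OR earthquake" under
# independent per-atom probabilities (e.g. neural network outputs).
probs = {"burglary": 0.1, "earthquake": 0.2}

def logical_fn(world):
    # The logical function: 1 if the world satisfies the formula, else 0.
    return 1.0 if world["burglary"] or world["earthquake"] else 0.0

def belief_fn(world):
    # The belief function: independent Bernoulli belief per atom.
    p = 1.0
    for atom, prob in probs.items():
        p *= prob if world[atom] else 1.0 - prob
    return p

print(neurosymbolic_inference(list(probs), logical_fn, belief_fn))
# → 0.28  (= 1 - 0.9 * 0.8)
```

Swapping in a continuous belief (and a genuine integral) or a fuzzy-valued logical function recovers other families of neurosymbolic systems under the same template.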