Grounding Methods for Neural-Symbolic AI

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
In neural-symbolic AI, logic grounding faces a fundamental trade-off between expressive power and computational scalability: exhaustive instantiation of all rule substitutions causes a combinatorial explosion, while heuristic selective derivation is cheaper but lacks theoretical justification and risks discarding information needed by the reasoner. Taking inspiration from multi-hop symbolic reasoning, this paper introduces a parametrized family of grounding methods that generalizes classic Backward Chaining into a single tunable framework: different parameter choices recover commonly used grounding methods as special cases and control the trade-off between the reasoner's expressiveness and its scalability. Experiments indicate that the choice of grounding criterion is often as important to a neural-symbolic system's performance as the choice of NeSy method itself.

📝 Abstract
A large class of Neural-Symbolic (NeSy) methods employs a machine learner to process the input entities, while relying on a reasoner based on First-Order Logic to represent and process more complex relationships among the entities. A fundamental role for these methods is played by the process of logic grounding, which determines the relevant substitutions for the logic rules using a (sub)set of entities. Some NeSy methods use an exhaustive derivation of all possible substitutions, preserving the full expressive power of the logic knowledge. This leads to a combinatorial explosion in the number of ground formulas to consider and, therefore, strongly limits their scalability. Other methods rely on heuristic-based selective derivations, which are generally more computationally efficient, but lack a justification and provide no guarantees of preserving the information provided to and returned by the reasoner. Taking inspiration from multi-hop symbolic reasoning, this paper proposes a parametrized family of grounding methods generalizing classic Backward Chaining. Different selections within this family allow us to obtain commonly employed grounding methods as special cases, and to control the trade-off between expressiveness and scalability of the reasoner. The experimental results show that the selection of the grounding criterion is often as important as the NeSy method itself.
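The grounding process described above can be illustrated with a toy sketch. The following is not the paper's actual algorithm, just a minimal depth-bounded backward-chaining grounder in Python under simplifying assumptions: atoms are `(predicate, args)` tuples, variables are capitalized strings, rules are Horn clauses `(head, body)`, and body variables are bound by matching against the known fact set (a simple selective criterion). The `depth` bound stands in for the family's tunable parameter: depth 0 grounds nothing, and increasing depth adds further hops of ground rule instances.

```python
# Hypothetical sketch of depth-bounded backward-chaining grounding.
# Atoms: (predicate, args) tuples; variables: capitalized strings;
# rules: Horn clauses (head, body) with body a tuple of atoms.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def substitute(atom, theta):
    pred, args = atom
    return (pred, tuple(theta.get(a, a) for a in args))

def unify(atom, ground_atom, theta):
    """Unify a (possibly non-ground) atom against a ground atom."""
    if atom[0] != ground_atom[0] or len(atom[1]) != len(ground_atom[1]):
        return None
    theta = dict(theta)
    for a, g in zip(atom[1], ground_atom[1]):
        a = theta.get(a, a)  # resolve variable if already bound
        if is_var(a):
            theta[a] = g
        elif a != g:
            return None
    return theta

def match_body(body, facts, theta):
    """Enumerate substitutions grounding the body atoms against the fact set."""
    if not body:
        yield theta
        return
    for fact in facts:
        t = unify(body[0], fact, theta)
        if t is not None:
            yield from match_body(body[1:], facts, t)

def backward_ground(goal, rules, facts, depth):
    """Collect ground rule instances reachable from a ground goal in <= depth hops."""
    grounded, frontier = set(), {goal}
    for _ in range(depth):
        next_frontier = set()
        for g in frontier:
            for head, body in rules:
                theta0 = unify(head, g, {})  # match rule head against the goal
                if theta0 is None:
                    continue
                for theta in match_body(list(body), facts, theta0):
                    inst = (substitute(head, theta),
                            tuple(substitute(b, theta) for b in body))
                    if inst not in grounded:
                        grounded.add(inst)
                        next_frontier.update(inst[1])  # body atoms become subgoals
        frontier = next_frontier
    return grounded
```

For a toy rule grandparent(X,Z) ← parent(X,Y) ∧ parent(Y,Z) with facts parent(a,b) and parent(b,c), calling `backward_ground(('grandparent', ('a', 'c')), rules, facts, depth=1)` yields the single relevant ground instance grandparent(a,c) :- parent(a,b), parent(b,c), whereas `depth=0` returns nothing, making the expressiveness/scalability knob explicit.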
Problem

Research questions and friction points this paper is trying to address.

Addressing combinatorial explosion in logic grounding for Neural-Symbolic AI
Balancing expressiveness and scalability in grounding methods
Providing theoretically justified rule substitutions with guarantees on preserving the information exchanged with the reasoner
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parametrized family of grounding methods
Generalizes classic Backward Chaining
Balances expressiveness and scalability