Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations

📅 2025-07-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from logical inconsistency in formal reasoning because of their generative nature, undermining reliability in settings involving contradictory or incomplete information. Method: This paper proposes a neuro-symbolic integration framework in which an LLM serves as the semantic interpretation function for a paraconsistent logic, parameterizing the model's knowledge within a logical semantics that tolerates contradictions. The authors construct an LLM-grounded interpretation function and embed it in a logical system whose soundness and completeness are preserved. Contribution/Results: To the authors' knowledge, this is the first approach that rigorously couples LLM knowledge with formal logical semantics while preserving both soundness and completeness of the underlying logic. Empirical evaluation on short-form factuality benchmarks demonstrates the feasibility of integrating the LLM's implicit world knowledge without compromising inferential rigor, improving reasoning robustness and factual consistency under inconsistent information.
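To make the method concrete, here is a minimal sketch of what an LLM-grounded interpretation function for a paraconsistent logic could look like. The paper does not publish its construction here, so this assumes a Belnap-Dunn four-valued (FDE) semantics, a standard paraconsistent choice, and a hypothetical yes/no oracle `ask_llm` standing in for an actual LLM call:

```python
from enum import Enum

# Four truth values of Belnap-Dunn logic (FDE): evidence for a claim,
# against it, both (a contradiction, tolerated without explosion), or neither.
class V(Enum):
    TRUE = "t"
    FALSE = "f"
    BOTH = "b"     # contradictory evidence -- does not trivialize the logic
    NEITHER = "n"  # no evidence either way

def llm_grounded_interpretation(atom, ask_llm):
    """Map an atomic proposition to a four-valued truth value by querying
    the (hypothetical) LLM oracle for the claim and for its negation."""
    pro = ask_llm(atom)
    con = ask_llm(f"not ({atom})")
    if pro and con:
        return V.BOTH
    if pro:
        return V.TRUE
    if con:
        return V.FALSE
    return V.NEITHER

# FDE conjunction: meet in the truth order f <= b, f <= n, b <= t, n <= t
# (b and n are incomparable, so their meet is f).
_RANK = {V.FALSE: 0, V.BOTH: 1, V.NEITHER: 1, V.TRUE: 2}

def fde_and(a, b):
    if {a, b} == {V.BOTH, V.NEITHER}:
        return V.FALSE
    return a if _RANK[a] <= _RANK[b] else b

# Stub oracle standing in for a real LLM call.
facts = {"Paris is in France": True, "not (Paris is in France)": False,
         "X": True, "not (X)": True}
oracle = lambda claim: facts.get(claim, False)

print(llm_grounded_interpretation("Paris is in France", oracle).value)  # t
print(llm_grounded_interpretation("X", oracle).value)  # b: tolerated glut
```

The key point the sketch illustrates: the LLM only supplies atomic valuations; the connectives and entailment relation stay purely symbolic, which is what lets the surrounding logic retain its soundness and completeness.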

📝 Abstract
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
Problem

Research questions and friction points this paper is trying to address.

Addressing logical inconsistency in LLM-generated outputs
Integrating LLMs into formal semantics for paraconsistent logic
Ensuring soundness and completeness in neuro-symbolic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM into paraconsistent logic semantics
Ensures soundness and completeness in reasoning
Leverages LLM knowledge for neuro-symbolic frameworks