Delta1 with LLM: symbolic and neural integration for credible and explainable reasoning

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a neuro-symbolic reasoning framework that couples the rigor of symbolic logic with the natural-language interpretability of large language models (LLMs) to build trustworthy, auditable systems. It introduces a novel integration of Delta1, an automated theorem generator based on the Full Triangular Standard Contradiction (FTSC), with an LLM: Delta1 deterministically produces minimal unsatisfiable clause sets and complete theorems in polynomial time, and the LLM translates the resulting formal proofs into natural-language explanations. This synergy enables an end-to-end "explain-as-you-construct" reasoning paradigm that guarantees correctness, minimality, and interpretability. The framework's auditability and domain alignment are validated in high-stakes domains such as healthcare and regulatory compliance, advancing the deep integration of logic, language, and learning.

📝 Abstract
Neuro-symbolic reasoning increasingly demands frameworks that unite the formal rigor of logic with the interpretability of large language models (LLMs). We introduce an end-to-end explainability-by-construction pipeline integrating the automated theorem generator Delta1, based on the full triangular standard contradiction (FTSC), with LLMs. Delta1 deterministically constructs minimal unsatisfiable clause sets and complete theorems in polynomial time, ensuring both soundness and minimality by construction. The LLM layer verbalizes each theorem and proof trace into coherent natural-language explanations and actionable insights. Empirical studies across healthcare, compliance, and regulatory domains show that Delta1 with LLM enables interpretable, auditable, and domain-aligned reasoning. This work advances the convergence of logic, language, and learning, positioning constructive theorem generation as a principled foundation for neuro-symbolic explainable AI.
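The abstract's central objects, minimal unsatisfiable clause sets and their natural-language verbalization, can be sketched in a few lines of Python. This is a hypothetical illustration, not Delta1 itself: the FTSC construction is not shown, a brute-force resolution check stands in for Delta1's polynomial-time procedure, and a template function (`verbalize`) stands in for the LLM layer. All names and the signed-integer literal encoding are illustrative.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (clauses are frozensets of signed ints)."""
    return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

def unsatisfiable(clauses):
    """Saturate under propositional resolution; True iff the empty clause is derivable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause derived: contradiction
                    return True
                new.add(frozenset(r))
        if new <= clauses:         # fixed point reached, no contradiction
            return False
        clauses |= new

def is_minimal_unsat(clauses):
    """Minimal unsatisfiable: the set is unsatisfiable, but dropping
    any single clause makes the remainder satisfiable."""
    cl = [frozenset(c) for c in clauses]
    return (unsatisfiable(cl)
            and all(not unsatisfiable(cl[:i] + cl[i + 1:]) for i in range(len(cl))))

def verbalize(clauses, names):
    """Template stand-in for the LLM layer (the paper's LLM produces richer,
    context-aware explanations of the full proof trace)."""
    def lit(l):
        return ("not " if l < 0 else "") + names[abs(l)]
    parts = [" or ".join(lit(l) for l in sorted(c, key=abs)) for c in clauses]
    return "These constraints cannot all hold together: " + "; ".join(parts) + "."

# p; not p or q; not q  --  a minimal unsatisfiable set
mus = [{1}, {-1, 2}, {-2}]
assert is_minimal_unsat(mus)
print(verbalize(mus, {1: "p", 2: "q"}))
# -> These constraints cannot all hold together: p; not p or q; not q.
```

Minimality is what makes such a set a useful explanation object: every clause in it is necessary for the contradiction, so the verbalized output contains no irrelevant constraints.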
Problem

Research questions and friction points this paper is trying to address.

neuro-symbolic reasoning
explainable AI
logical rigor
interpretability
theorem generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic reasoning
automated theorem generation
explainable AI
large language models
constructive reasoning
👥 Authors
Yang Xu
School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China; National–Local Joint Engineering Laboratory of System Credibility Automatic Verification, Southwest Jiaotong University, Chengdu 611756, Sichuan, China
Jun Liu
Ulster University
Artificial Intelligence, Decision Science, Logic, Risk Assessment, Computing
Shuwei Chen
School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China; National–Local Joint Engineering Laboratory of System Credibility Automatic Verification, Southwest Jiaotong University, Chengdu 611756, Sichuan, China
Chris Nugent
Ulster University
Ambient Assisted Living, Smart Homes, Smart Environments, Activity Recognition
Hailing Guo
School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China; National–Local Joint Engineering Laboratory of System Credibility Automatic Verification, Southwest Jiaotong University, Chengdu 611756, Sichuan, China