On LLM-generated Logic Programs and their Inference Execution Methods

📅 2025-02-11
🏛️ Electronic Proceedings in Theoretical Computer Science
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of implicit and unverifiable knowledge in large language models (LLMs) by introducing a novel paradigm that structurally encodes LLM outputs as executable logical programs—specifically, propositional Horn clauses, dual Horn clauses, relational triples, and definite clause grammars—to enable sound, verifiable reasoning. Methodologically: (1) it designs a soft unification mechanism to semantically align logical facts with LLM-derived vector representations in embedding databases; (2) it proposes a GPU-accelerated minimal model solving algorithm for efficient, scalable inference over large-scale logical programs. Contributions include: the first end-to-end, verifiable extraction of LLM knowledge into multiple formal logical representations; significant improvements in both the scale and speed of logically sound inference; and empirical validation of enhanced knowledge consistency checking and reasoning capability.
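The soft-unification idea in point (1) can be sketched as nearest-neighbor matching in embedding space: an abducible fact is considered to "unify" with a stored fact when their vectors are close enough. The facts, vectors, threshold, and function names below are illustrative assumptions, not the paper's implementation or data.

```python
import math

# Toy embedding table standing in for an LLM-populated vector database.
# Facts and vectors are made up for illustration.
EMBEDDINGS = {
    "parent(ann, bob)":   [0.9, 0.1, 0.0],
    "parent(bob, carol)": [0.8, 0.2, 0.1],
    "likes(ann, jazz)":   [0.1, 0.9, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def soft_unify(query_vec, threshold=0.95):
    """Return stored facts whose embedding is close enough to the query,
    best match first."""
    scored = [(fact, cosine(query_vec, vec)) for fact, vec in EMBEDDINGS.items()]
    scored.sort(key=lambda fs: -fs[1])
    return [(f, s) for f, s in scored if s >= threshold]

# A query vector near "parent(ann, bob)" soft-unifies with it, even though
# no exact symbolic match is required.
matches = soft_unify([0.88, 0.12, 0.02])
```

Unlike classical unification, the match is graded: the threshold trades recall against the risk of aligning semantically unrelated facts.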

📝 Abstract
Large Language Models (LLMs) trained on petabytes of data are highly compressed repositories of a significant proportion of the knowledge accumulated and distilled so far. In this paper we study techniques to elicit this knowledge in the form of several classes of logic programs, including propositional Horn clauses, Dual Horn clauses, relational triplets and Definite Clause Grammars. Exposing this knowledge as logic programs enables sound reasoning methods that can verify alignment of LLM outputs to their intended uses and extend their inference capabilities. We study new execution methods for the generated programs, including soft-unification of abducible facts against LLM-generated content stored in a vector database as well as GPU-based acceleration of minimal model computation that supports inference with large LLM-generated programs.
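For propositional Horn programs, the minimal model the abstract refers to is the least fixpoint of forward chaining: starting from the facts, repeatedly fire every rule whose whole body is already derived. A minimal sketch (naive iteration, not the paper's accelerated algorithm; the rules are invented examples):

```python
# Each rule is (head, [body atoms]); a fact is a rule with an empty body.
RULES = [
    ("rain", []),                            # fact
    ("wet_ground", ["rain"]),                # rain -> wet_ground
    ("slippery", ["wet_ground"]),            # wet_ground -> slippery
    ("accident", ["slippery", "speeding"]),  # needs both body atoms
]

def minimal_model(rules):
    """Least model of a propositional Horn program by forward chaining."""
    model = set()
    changed = True
    while changed:                 # iterate until a fixpoint is reached
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

model = minimal_model(RULES)
```

Here "accident" is not derived because "speeding" is never established, which illustrates why the minimal model supports sound inference: it contains only atoms forced by the program.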
Problem

Research questions and friction points this paper is trying to address.

Extract logic programs from LLMs
Verify LLM output alignment
Accelerate inference execution methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates logic programs from LLM knowledge
Uses soft-unification for abducible facts
Accelerates inference with GPU computation
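One way the GPU acceleration mentioned above can work is to evaluate all rule bodies in a fixpoint iteration at once with dense matrix arithmetic. The sketch below uses NumPy on the CPU as a stand-in for a GPU array library; the encoding (a body-incidence matrix and a head index per rule) and the example program are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Atom universe and an invented propositional Horn program.
atoms = ["rain", "wet_ground", "slippery", "speeding", "accident"]
idx = {a: i for i, a in enumerate(atoms)}
rules = [("wet_ground", ["rain"]),
         ("slippery", ["wet_ground"]),
         ("accident", ["slippery", "speeding"])]

# B[r, a] = 1 iff atom a occurs in the body of rule r; head[r] = head index.
B = np.zeros((len(rules), len(atoms)), dtype=np.int64)
head = np.zeros(len(rules), dtype=np.int64)
for r, (h, body) in enumerate(rules):
    head[r] = idx[h]
    for a in body:
        B[r, idx[a]] = 1
body_len = B.sum(axis=1)

truth = np.zeros(len(atoms), dtype=bool)
truth[idx["rain"]] = True              # initial fact
while True:
    fired = (B @ truth) == body_len    # all rule bodies checked in one matmul
    new = truth.copy()
    new[head[fired]] = True            # assert the heads of fired rules
    if (new == truth).all():           # fixpoint: minimal model reached
        break
    truth = new
```

Each iteration is one matrix-vector product plus a scatter, so on a GPU the work per step is parallel across all rules, which is what makes inference over very large generated programs feasible.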