AI Summary
This work addresses the limitations of large language models (LLMs) in reliably integrating textual understanding with logical reasoning, as well as the narrow applicability of existing neuro-symbolic systems, which are confined to fully formalized tasks and struggle with natural language documents that contain only partial logical structure. To bridge this gap, the authors propose Logitext, a neuro-symbolic language grounded in Natural Language Textual Constraints (NLTCs). Logitext embeds LLM-based reasoning within a satisfiability modulo theories (SMT) framework, enabling the LLM to function as a theory within the SMT solver for joint inference. This approach extends the applicability of neuro-symbolic systems to partially formalized settings, achieving notable improvements in both reasoning accuracy and coverage across diverse benchmarks, including a new content moderation benchmark, LegalBench, and Super-Natural Instructions.
Abstract
Natural language understanding requires interleaving textual and logical reasoning, yet large language models often fail to perform such reasoning reliably. Existing neurosymbolic systems combine LLMs with solvers but remain limited to fully formalizable tasks such as math or program synthesis, leaving natural documents with only partial logical structure unaddressed. We introduce Logitext, a neurosymbolic language that represents documents as natural language textual constraints (NLTCs), making partial logical structure explicit. We develop an algorithm that integrates LLM-based constraint evaluation with satisfiability modulo theories (SMT) solving, enabling joint textual-logical reasoning. Experiments on a new content moderation benchmark, together with LegalBench and Super-Natural Instructions, show that Logitext improves both accuracy and coverage. This work is the first to treat LLM-based reasoning as an SMT theory, extending neurosymbolic methods beyond fully formalizable domains.
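To make the core idea concrete, here is a minimal, self-contained sketch of how an "LLM as an SMT theory" loop might look. All names (`llm_eval`, `solve`, the clause encoding) are illustrative assumptions, not the paper's actual API: a propositional search stands in for the SAT core of an SMT solver, and a trivial keyword matcher stands in for the LLM that judges whether a document satisfies a natural-language textual constraint.

```python
from itertools import product


def llm_eval(constraint: str, text: str) -> bool:
    """Placeholder for an LLM call that judges whether `text` satisfies the
    natural-language `constraint`. Here: naive keyword containment."""
    return all(word in text for word in constraint.split())


def solve(symbolic_clauses, nltc_atoms, text):
    """Toy SMT-style loop. `symbolic_clauses` is a CNF over boolean atoms,
    each clause a list of (atom, polarity) literals; `nltc_atoms` maps some
    atoms to natural-language constraints. For each boolean model of the
    symbolic part, the 'LLM theory' checks that every NLTC atom's truth
    value agrees with the LLM's judgement on the document `text`."""
    atoms = sorted(
        {a for clause in symbolic_clauses for a, _ in clause} | set(nltc_atoms)
    )
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        # Propositional part: every clause needs one satisfied literal.
        if not all(
            any(model[a] == pol for a, pol in clause) for clause in symbolic_clauses
        ):
            continue
        # Theory part: NLTC atoms must match the LLM's evaluation.
        if all(model[a] == llm_eval(nltc_atoms[a], text) for a in nltc_atoms):
            return model  # joint textual-logical model found
    return None  # unsatisfiable under the LLM theory


# Content-moderation-flavored usage: the policy requires the atom `flag`,
# and `flag` is an NLTC meaning "the post mentions a weapon".
policy = [[("flag", True)]]
nltcs = {"flag": "weapon"}
print(solve(policy, nltcs, "this post mentions a weapon"))
print(solve(policy, nltcs, "a harmless post"))
```

A real implementation would instead register the LLM as a theory callback inside an SMT solver so that only models consistent with the symbolic constraints trigger (expensive) LLM calls; the exhaustive enumeration here is purely for clarity.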