A Neurosymbolic Approach to Natural Language Formalization and Verification

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In highly regulated domains such as finance and healthcare, the inherent stochasticity of large language models (LLMs) impedes their compliant deployment, necessitating formalization of natural language policies and rigorous verification of logical correctness. This paper proposes a two-stage neurosymbolic framework: first, LLM-driven runtime autoformalization, augmented by human-in-the-loop guidance; second, cross-verification via multiple independent formalizations and semantic equivalence checking to ensure logical consistency. The approach substantially reduces false positives and generates auditable, traceable chains of logical evidence. Evaluated on benchmark policy datasets, it exceeds 99% soundness, marking the first demonstration of high-accuracy, traceable, and formally verifiable automated compliance assessment for natural language policies. This work charts a trustworthy-AI pathway for high-stakes operational environments.

📝 Abstract
Large Language Models perform well at natural language interpretation and reasoning, but their inherent stochasticity limits their adoption in regulated industries like finance and healthcare that operate under strict policies. To address this limitation, we present a two-stage neurosymbolic framework that (1) uses LLMs with optional human guidance to formalize natural language policies, allowing fine-grained control of the formalization process, and (2) uses inference-time autoformalization to validate the logical correctness of natural language statements against those policies. When correctness is paramount, we perform multiple redundant formalization steps at inference time, cross-checking the formalizations for semantic equivalence. Our benchmarks demonstrate that our approach exceeds 99% soundness, indicating a near-zero false positive rate in identifying logical validity. Our approach produces auditable logical artifacts that substantiate the verification outcomes and can be used to improve the original text.
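The cross-checking step described in the abstract can be illustrated with a small, self-contained sketch. The policy sentence, predicate names, and truth-table comparison below are illustrative assumptions, not the paper's actual formalization language or checking procedure (which presumably uses richer logics and solver-backed equivalence checks): two independently produced formalizations of the same policy sentence are declared semantically equivalent only if they agree on every model.

```python
from itertools import product

# Hypothetical sketch of cross-verification: two independent formalizations
# of the same natural-language policy are compared for semantic equivalence
# by exhaustive truth-table enumeration. Predicate names are illustrative.

def f1(approved, age_ok, consent):
    # "If a request is approved, the age check passed and consent was given."
    return (not approved) or (age_ok and consent)

def f2(approved, age_ok, consent):
    # An independently produced formalization of the same sentence.
    return ((not approved) or age_ok) and ((not approved) or consent)

def equivalent(a, b, arity=3):
    """True iff the two formulas agree on every truth assignment."""
    return all(a(*vals) == b(*vals)
               for vals in product([False, True], repeat=arity))

print(equivalent(f1, f2))  # True: both formalizations denote the same policy
```

If the two formalizations diverge on any assignment, the redundancy has caught a likely formalization error, which is what drives the near-zero false positive rate the abstract reports.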
Problem

Research questions and friction points this paper is trying to address.

Addressing LLM stochasticity for regulated industry policy compliance
Formalizing natural language policies with human-guided control
Verifying logical correctness through auditable autoformalization processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neurosymbolic framework formalizes policies with human guidance
Autoformalization validates logical correctness of statements
Multiple redundant formalizations ensure semantic equivalence checking
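The validation step named above, checking the logical correctness of a statement against a formalized policy, amounts to an entailment check. The sketch below again uses hypothetical propositional predicates and brute-force model enumeration purely for illustration; it is not the paper's implementation.

```python
from itertools import product

# Hypothetical formalized policy: approval requires a passed age check and consent.
def policy(approved, age_ok, consent):
    return (not approved) or (age_ok and consent)

# Formalization of a statement to validate: "every approved request has consent".
def claim(approved, age_ok, consent):
    return (not approved) or consent

def entails(premise, conclusion, arity=3):
    """True iff no truth assignment satisfies the premise but falsifies the conclusion."""
    return all((not premise(*v)) or conclusion(*v)
               for v in product([False, True], repeat=arity))

print(entails(policy, claim))  # True: the statement holds under the policy
```

The set of assignments checked here plays the role of the auditable artifact: a failed entailment comes with a concrete countermodel that explains why the statement violates the policy.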
Sam Bayless
Amazon Web Services
Stefano Buliani
Amazon Web Services
Darion Cassel
Amazon Web Services
Byron Cook
Amazon Web Services, University College London
Duncan Clough
Amazon Web Services
Rémi Delmas
Amazon Web Services
Nafi Diallo
Amazon Web Services
Ferhat Erata
Yale University
Neuro-Symbolic AI, Automated Reasoning, Alignment, Security & Privacy
Nick Feng
University of Toronto
Software Engineering, Verification
D. Giannakopoulou
Amazon Web Services
Aman Goel
Applied Scientist, Amazon Web Services
Generative AI, Trustworthy AI, Automated Reasoning, Distributed Systems
Aditya Gokhale
Amazon Web Services
Joe Hendrix
Amazon Web Services
Marc Hudak
Amazon Web Services
Dejan Jovanović
Amazon Web Services
Andrew M. Kent
Amazon Web Services
Benjamin Kiesl-Reiter
Amazon Web Services
Jeffrey J. Kuna
Amazon Web Services
Nadia Labai
Amazon Web Services
Joseph Lilien
Amazon Web Services
Divya Raghunathan
Amazon Web Services
Zvonimir Rakamarić
Amazon Web Services
Niloofar Razavi
Amazon Web Services
Michael Tautschnig
Amazon Web Services
Ali Torkamani
University of Oregon, Cambia Health Solutions, Amazon Web Services
Machine Learning, Artificial Intelligence, Computer Vision
Nathaniel Weir
Johns Hopkins University
Natural Language Processing, Artificial Intelligence, Linguistics
Michael W. Whalen
Amazon Web Services
Jianan Yao
University of Toronto