VeriCoT: Neuro-symbolic Chain-of-Thought Validation via Logical Consistency Checks

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Chain-of-thought (CoT) reasoning in large language models (LLMs) frequently contains latent logical flaws, even when the final answer is correct, undermining reliability in high-stakes applications. Method: a neuro-symbolic framework that automatically extracts and formally verifies the logical structure of CoT reasoning: each natural-language step is translated into a first-order logic expression, grounded in premises drawn from the source context, commonsense knowledge, or prior reasoning steps, and validated stepwise with an automated solver. The verification signal further drives inference-time self-reflection, supervised fine-tuning on distilled data, and direct preference optimization (DPO) with verification-based pairwise rewards. Contribution/Results: Evaluated on ProofWriter, LegalBench, and BioASQ, the method reliably identifies flawed reasoning paths, serves as a strong predictor of final-answer correctness, and improves both reasoning validity and answer accuracy, yielding an end-to-end verifiable CoT pipeline.

📝 Abstract
LLMs can perform multi-step reasoning through Chain-of-Thought (CoT), but they cannot reliably verify their own logic. Even when they reach correct answers, the underlying reasoning may be flawed, undermining trust in high-stakes scenarios. To mitigate this issue, we introduce VeriCoT, a neuro-symbolic method that extracts and verifies formal logical arguments from CoT reasoning. VeriCoT formalizes each CoT reasoning step into first-order logic and identifies premises that ground the argument in source context, commonsense knowledge, or prior reasoning steps. The symbolic representation enables automated solvers to verify logical validity while the NL premises allow humans and systems to identify ungrounded or fallacious reasoning steps. Experiments on the ProofWriter, LegalBench, and BioASQ datasets show VeriCoT effectively identifies flawed reasoning, and serves as a strong predictor of final answer correctness. We also leverage VeriCoT's verification signal for (1) inference-time self-reflection, (2) supervised fine-tuning (SFT) on VeriCoT-distilled datasets and (3) preference fine-tuning (PFT) with direct preference optimization (DPO) using verification-based pairwise rewards, further improving reasoning validity and accuracy.
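The core loop described in the abstract (formalize each step, check it against grounded premises, let only verified steps ground later ones) can be sketched in miniature. This is an illustrative toy, not VeriCoT's implementation: the paper translates steps into first-order logic and uses an automated solver, whereas the sketch below assumes steps have already been reduced to propositional Horn clauses and checks entailment by forward chaining.

```python
# Toy per-step CoT verification. Assumes each natural-language reasoning
# step has already been translated into a propositional conclusion plus
# Horn rules; all names here are illustrative, not VeriCoT's interface.

def entails(facts, rules, goal):
    """Return True if `goal` follows from `facts` under Horn `rules`
    (each rule is a (body_tuple, head) pair), via forward chaining."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

def verify_chain(context_facts, rules, steps):
    """Check each step's conclusion against the source context plus all
    previously *verified* conclusions; return (conclusion, ok) labels."""
    grounded = set(context_facts)
    report = []
    for conclusion in steps:
        ok = entails(grounded, rules, conclusion)
        report.append((conclusion, ok))
        if ok:  # only verified steps may ground later ones
            grounded.add(conclusion)
    return report

# Toy chain: "Rex is a dog; dogs are mammals; mammals are animals."
facts = {"dog(rex)"}
rules = [
    (("dog(rex)",), "mammal(rex)"),
    (("mammal(rex)",), "animal(rex)"),
]
steps = ["mammal(rex)", "animal(rex)", "can_fly(rex)"]  # last step ungrounded
print(verify_chain(facts, rules, steps))
# → [('mammal(rex)', True), ('animal(rex)', True), ('can_fly(rex)', False)]
```

The key design point mirrored from the paper is that an unverified step is excluded from the premise set, so a single flaw cannot silently license later conclusions.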
Problem

Research questions and friction points this paper is trying to address.

Verifying logical consistency in Chain-of-Thought reasoning by large language models
Identifying flawed reasoning steps through neuro-symbolic formalization and validation
Improving reasoning trustworthiness in high-stakes applications via automated verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts formal logical arguments from Chain-of-Thought reasoning
Verifies logical consistency using automated symbolic solvers
Uses verification signal for fine-tuning and self-reflection
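The last point, turning the verifier into a training signal, can be sketched as pairwise preference construction for DPO: sampled CoTs that pass verification become "chosen" responses and those with a failed step become "rejected". The `verifier` callable and record fields below are assumptions for illustration, not the paper's released interface.

```python
# Hedged sketch: building DPO-style (prompt, chosen, rejected) triples
# from a per-CoT verification signal. All field names are illustrative.

def build_preference_pairs(prompt, candidates, verifier):
    """Pair every fully verified CoT (chosen) with every CoT containing
    a failed step (rejected), yielding pairwise preference data."""
    passed = [c for c in candidates if verifier(c)]
    failed = [c for c in candidates if not verifier(c)]
    return [(prompt, good, bad) for good in passed for bad in failed]

# Toy verifier: a CoT passes iff no step was flagged invalid.
verifier = lambda cot: all(ok for _, ok in cot["steps"])

cands = [
    {"text": "A", "steps": [("s1", True), ("s2", True)]},   # verified
    {"text": "B", "steps": [("s1", True), ("s2", False)]},  # flawed step
]
pairs = build_preference_pairs("Q", cands, verifier)
print(len(pairs))  # → 1
```

Each resulting triple can feed a standard DPO loss, rewarding logically verified chains over superficially similar but unverifiable ones.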
👥 Authors
Yu Feng (University of Pennsylvania)
Nathaniel Weir (Johns Hopkins University)
Kaj Bostrom (Amazon Web Services)
Sam Bayless (Amazon Web Services)
Darion Cassel (Amazon Web Services)
Sapana Chaudhary (AWS AI)
Benjamin Kiesl-Reiter (Amazon Web Services)
H. Rangwala (Amazon Web Services)