🤖 AI Summary
This work addresses the fragility of automatic formalization (AF) in logical reasoning, which often suffers from semantic errors or execution failures that existing repair methods struggle to resolve. The authors propose the Draft-and-Prune framework, which leverages large language models at inference time to generate diverse natural-language plans that guide program synthesis. The resulting candidate formalizations are then pruned by a symbolic solver to eliminate those that, while syntactically executable, exhibit semantic contradictions or ambiguities. Final predictions are aggregated via majority voting over the remaining valid paths. This approach integrates diverse planning with an unsupervised semantic-validation mechanism, substantially improving the reliability and accuracy of AF. On the AR-LSAT benchmark, it achieves 78.43% and 78.00% accuracy with GPT-4 and GPT-4o, respectively, significantly outperforming prior methods, and reaches 100% accuracy on PrOntoQA and LogicalDeduction.
📝 Abstract
Auto-formalization (AF) translates natural-language reasoning problems into solver-executable programs, enabling symbolic solvers to perform sound logical deduction. In practice, however, AF pipelines are brittle: programs may fail to execute, or execute but encode incorrect semantics. While prior work largely mitigates syntactic failures via repairs based on solver feedback, reducing semantic failures remains a major bottleneck. We propose Draft-and-Prune (D&P), an inference-time framework that improves AF-based logical reasoning via diversity and verification. D&P first drafts multiple natural-language plans and conditions program generation on them. It then prunes executable but contradictory or ambiguous formalizations, and aggregates predictions from the surviving paths via majority voting. Across four representative benchmarks (AR-LSAT, ProofWriter, PrOntoQA, LogicalDeduction), D&P substantially strengthens AF-based reasoning without extra supervision. On AR-LSAT, in the AF-only setting, D&P achieves 78.43% accuracy with GPT-4 and 78.00% with GPT-4o, significantly outperforming the strongest AF baselines, MAD-LOGIC and CLOVER. D&P also attains near-ceiling performance on the other benchmarks, including 100% on PrOntoQA and LogicalDeduction.
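To make the draft-prune-vote pipeline concrete, here is a minimal sketch of the aggregation stage. It is an illustration under assumptions, not the paper's implementation: in the actual framework, drafting and formalization are done by an LLM and pruning by a symbolic solver, whereas here each candidate path is a hypothetical record carrying the solver's outcome flags (`executes`, `consistent`, `verdict`).

```python
from collections import Counter

def prune(candidates):
    """Keep only candidate formalizations that execute AND yield a definite,
    internally consistent verdict.

    Each candidate is a dict (a stand-in for an LLM-generated program plus
    its solver report) with:
      - "executes":   bool        program ran without solver errors
      - "consistent": bool        no contradiction detected by the solver
      - "verdict":    str | None  predicted answer; None means ambiguous
    """
    return [c for c in candidates
            if c["executes"] and c["consistent"] and c["verdict"] is not None]

def majority_vote(candidates):
    """Aggregate the surviving reasoning paths by majority vote."""
    votes = Counter(c["verdict"] for c in candidates)
    if not votes:
        return None  # all paths pruned; a real system would fall back
    return votes.most_common(1)[0][0]

# Toy run: five drafted paths; three are pruned for different reasons.
paths = [
    {"executes": True,  "consistent": True,  "verdict": "B"},
    {"executes": False, "consistent": True,  "verdict": "A"},   # execution failure
    {"executes": True,  "consistent": True,  "verdict": "B"},
    {"executes": True,  "consistent": False, "verdict": "A"},   # contradictory
    {"executes": True,  "consistent": True,  "verdict": None},  # ambiguous
]
survivors = prune(paths)
answer = majority_vote(survivors)  # → "B"
```

The key design point the sketch captures is that pruning acts on executable programs: an execution check alone would keep the contradictory and ambiguous paths, which is exactly the semantic-failure mode D&P targets.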