SATBench: Benchmarking LLMs' Logical Reasoning via Automated Puzzle Generation from SAT Formulas

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous evaluation benchmarks for large language models (LLMs) on search-oriented logical reasoning, particularly Boolean satisfiability (SAT)-based reasoning. To this end, the authors introduce SATBench, the first fully automated, scalable, and difficulty-controllable SAT benchmark. Its core innovation is end-to-end generation of narrative-style logic puzzles directly from SAT formulas, combining LLM-based prompt engineering, SAT-solver verification, and human validation to ensure correctness and fidelity. Evaluation across 2,100 puzzles reveals critical limitations: even the strongest model tested, o4-mini, achieves only 65.0% accuracy on hard UNSAT instances, not far above the 50% random baseline. These results expose fundamental weaknesses in LLMs' capacity for deep combinatorial search and formal constraint reasoning, underscoring the need for improved architectures and evaluation frameworks for deductive reasoning tasks.
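The pipeline described above starts from a raw SAT formula whose clause count sets the difficulty. A minimal sketch of that first stage, assuming DIMACS-style integer literals (positive = variable, negative = its negation) and a brute-force satisfiability check; the function names are illustrative, and the actual benchmark uses a real SAT solver for verification:

```python
import itertools
import random

def random_3sat(num_vars, num_clauses, seed=0):
    """Generate a random 3-SAT formula as a list of 3-literal clauses.
    Difficulty is tuned by `num_clauses`, mirroring SATBench's
    clause-count knob."""
    rng = random.Random(seed)
    formula = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), 3)
        formula.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return formula

def is_satisfiable(formula, num_vars):
    """Brute-force SAT check: try every assignment (fine for small n)."""
    for bits in itertools.product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula):
            return True
    return False
```

Each generated formula, together with its SAT/UNSAT verdict, would then be handed to an LLM to be narrativized into a story context and conditions.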

📝 Abstract
We introduce SATBench, a benchmark for evaluating the logical reasoning capabilities of large language models (LLMs) through logical puzzles derived from Boolean satisfiability (SAT) problems. Unlike prior work that focuses on inference rule-based reasoning, which often involves deducing conclusions from a set of premises, our approach leverages the search-based nature of SAT problems, where the objective is to find a solution that fulfills a specified set of logical constraints. Each instance in SATBench is generated from a SAT formula, then translated into a story context and conditions using LLMs. The generation process is fully automated and allows for adjustable difficulty by varying the number of clauses. All 2100 puzzles are validated through both LLM-assisted and solver-based consistency checks, with human validation on a subset. Experimental results show that even the strongest model, o4-mini, achieves only 65.0% accuracy on hard UNSAT problems, close to the random baseline of 50%. SATBench exposes fundamental limitations in the search-based logical reasoning abilities of current LLMs and provides a scalable testbed for future research in logical reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' logical reasoning via SAT-derived puzzles
Automating puzzle generation with adjustable difficulty levels
Exposing limitations in LLMs' search-based logical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated puzzle generation from SAT formulas
Adjustable difficulty via clause variation
LLM-assisted and solver-based validation
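The solver-based validation step can be illustrated with a toy consistency check: the solver's verdict on a puzzle's underlying formula must match the puzzle's intended SAT/UNSAT label. Below is a minimal pure-Python DPLL sketch with unit propagation, written for illustration only; a production pipeline would use an off-the-shelf solver such as MiniSat:

```python
def dpll(clauses):
    """Minimal DPLL satisfiability check with unit propagation.
    `clauses` is an iterable of clauses; each clause is a set of
    DIMACS-style literals (positive int = variable, negative = negation)."""
    def propagate(cls, lit):
        # Assign `lit` true: drop satisfied clauses, shrink the rest.
        out = []
        for c in cls:
            if lit in c:
                continue              # clause satisfied
            if -lit in c:
                c = c - {-lit}        # literal falsified
                if not c:
                    return None       # empty clause: conflict
            out.append(c)
        return out

    def solve(cls):
        if not cls:
            return True               # all clauses satisfied
        unit = next((next(iter(c)) for c in cls if len(c) == 1), None)
        if unit is not None:          # unit propagation
            reduced = propagate(cls, unit)
            return reduced is not None and solve(reduced)
        lit = next(iter(cls[0]))      # branch on an arbitrary literal
        for choice in (lit, -lit):
            reduced = propagate(cls, choice)
            if reduced is not None and solve(reduced):
                return True
        return False

    return solve([set(c) for c in clauses])
```

A validation pass would keep a puzzle only if the solver verdict agrees with its intended label, e.g. `dpll(formula) == (label == "SAT")`.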