🤖 AI Summary
Large language models (LLMs) face a critical bottleneck in mathematical reasoning due to the scarcity of high-quality, logically rigorous training data.
Method: This paper proposes a purely symbolic data-generation framework that requires neither LLMs nor proof assistants. It runs E-prover's saturation-based automated theorem proving over the TPTP axiom library, then applies formal criteria to select "interesting" theorems and constructs three difficulty-controlled logical reasoning tasks: entailment verification, premise selection, and proof reconstruction.
Contribution/Results: The resulting dataset is large-scale, logically consistent, and structurally transparent, making it suitable for both model training and evaluation. Zero-shot evaluation reveals substantial performance degradation of mainstream LLMs on tasks requiring deep structural reasoning, demonstrating the dataset's strong diagnostic capability and its potential to advance LLMs' formal reasoning competence.
📝 Abstract
The scarcity of high-quality, logically sound data is a critical bottleneck for advancing the mathematical reasoning of Large Language Models (LLMs). Our work confronts this challenge by turning decades of automated theorem proving research into a scalable data engine. Rather than relying on error-prone LLMs or the complex syntax of proof assistants like Lean and Isabelle, our framework leverages E-prover's saturation capabilities on the vast TPTP axiom library to derive a massive, guaranteed-valid corpus of theorems. Our pipeline is principled and simple: saturate axioms, filter for "interesting" theorems, and generate tasks. With no LLMs in the loop, we eliminate factual errors by construction. This purely symbolic data is then transformed into three difficulty-controlled challenges: entailment verification, premise selection, and proof reconstruction. Our zero-shot experiments on frontier models reveal a clear weakness: performance collapses on tasks requiring deep, structural reasoning. Our framework provides both the diagnostic tool to measure this gap and a scalable source of symbolic training data to address it. We make the code and data publicly available.
Code: https://github.com/sileod/reasoning_core
Data: https://hf.co/datasets/reasoning-core/rc1
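To make the three task formats concrete, here is a minimal sketch (not the authors' code) of how a single theorem derived by a saturation run might be packaged into the three challenges. The function name, the dictionary schema, and the toy TPTP-like clauses are all hypothetical illustrations; the actual pipeline's selection criteria and difficulty controls are not reproduced here.

```python
import random

def make_tasks(used_axioms, theorem, proof_steps, distractors):
    """Sketch: build one instance of each task type from a prover-derived theorem.

    used_axioms: axioms actually used in the derivation
    proof_steps: the ordered derivation, ending at the theorem
    distractors: axioms from the library that were NOT used in the proof
    """
    # 1. Entailment verification: do the given premises entail the conclusion?
    #    The positive label is guaranteed by the prover's derivation.
    entailment = {
        "premises": used_axioms,
        "conclusion": theorem,
        "label": "entailed",
    }
    # 2. Premise selection: recover the axioms the proof depends on,
    #    hidden among unused distractor axioms.
    premise_selection = {
        "candidates": sorted(used_axioms + distractors),
        "answer": sorted(used_axioms),
    }
    # 3. Proof reconstruction: reorder shuffled proof steps into a valid
    #    derivation that ends at the theorem.
    shuffled = proof_steps[:]
    random.Random(0).shuffle(shuffled)  # fixed seed for reproducibility
    proof_reconstruction = {
        "shuffled_steps": shuffled,
        "answer": proof_steps,
    }
    return entailment, premise_selection, proof_reconstruction

# Toy example in TPTP-like first-order notation
axioms = ["p(a)", "![X]: (p(X) => q(X))"]
steps = ["p(a)", "p(a) => q(a)", "q(a)"]
tasks = make_tasks(axioms, "q(a)", steps, distractors=["r(b)"])
```

Difficulty can then be dialed up along obvious axes, e.g. deeper derivations for proof reconstruction or more distractors for premise selection, which matches the paper's framing of the tasks as difficulty-controlled.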