Reasoning Core: A Scalable Procedural Data Generation Suite for Symbolic Pre-training and Post-Training

📅 2026-03-02
🤖 AI Summary
Existing approaches to symbolic reasoning data generation rely on fixed templates, limiting diversity and scalability of training signals. This work proposes the first unified, multi-domain procedural data generation framework that encompasses core formal reasoning tasks—including PDDL planning, first-order logic, context-free grammars, causal reasoning, and systems of equations—while supporting controllable difficulty levels, solver-based verification, and complete reasoning trace generation. The framework enables early integration of supervised signals during pretraining and provides verifiable rewards for reinforcement learning. Experiments demonstrate that pretraining with a mixture of generated data substantially enhances downstream reasoning capabilities without compromising language modeling performance. Zero-shot evaluations further reveal that these tasks remain challenging even for state-of-the-art models such as GPT-5.

📝 Abstract
Training on verifiable symbolic data is a promising way to expand the reasoning frontier of language models beyond what standard pre-training corpora provide. Yet existing procedural generators often rely on fixed puzzles or templates and do not deliver the distributional breadth needed at scale. We introduce Reasoning Core, a scalable suite that procedurally generates verifiable symbolic reasoning data across core formal domains: PDDL planning over randomized domains, first-order logic with equality, context-free grammar parsing and generation, causal reasoning over random Bayesian networks, and systems of equations. Each task is paired with an external solver for rigorous verification and admits continuous difficulty control for curriculum design. Examples can optionally include solver-derived reasoning traces, enabling supervised training from the earliest pre-training stages, and the same interface provides verifiable reward functions for reinforcement learning. Our experiments show that mixing Reasoning Core data into pre-training improves downstream reasoning while preserving, or slightly improving, language modeling quality. Zero-shot evaluations confirm these tasks challenge frontier models such as GPT-5. The code and data are publicly available under the MIT license.
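The abstract's generate-then-verify pattern can be illustrated with a minimal sketch for the systems-of-equations domain. This is a hypothetical illustration, not the paper's actual generator interface: it samples a hidden solution first (so every instance is solvable by construction), renders a prompt, and exposes a solver-style check usable as a verifiable reward; `coeff_range` stands in for the continuous difficulty control the suite describes.

```python
import random

def generate_linear_system(seed=None, coeff_range=5):
    """Procedurally generate a 2x2 integer linear system with a known
    solution. `coeff_range` acts as a crude difficulty knob: larger
    coefficients make instances harder. (Hypothetical sketch only.)"""
    rng = random.Random(seed)
    # Sample the hidden solution first, then coefficients, so every
    # generated instance is solvable by construction.
    x = rng.randint(-coeff_range, coeff_range)
    y = rng.randint(-coeff_range, coeff_range)
    while True:
        a, b, c, d = (rng.randint(-coeff_range, coeff_range) for _ in range(4))
        if a * d - b * c != 0:  # non-singular => unique solution
            break
    e, f = a * x + b * y, c * x + d * y
    prompt = f"Solve for x and y: {a}x + {b}y = {e}; {c}x + {d}y = {f}"
    return prompt, (x, y), (a, b, c, d, e, f)

def verify(answer, coeffs):
    """Solver-style check: substitute the proposed answer back into
    both equations. Doubles as a binary reward for RL."""
    a, b, c, d, e, f = coeffs
    x, y = answer
    return a * x + b * y == e and c * x + d * y == f
```

Because the system is non-singular, the sampled solution is unique, so `verify` rejects every other candidate; this is the property that makes the reward signal trustworthy without a reference model.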
Problem

Research questions and friction points this paper is trying to address.

symbolic reasoning
procedural data generation
language model pre-training
scalable reasoning data
verifiable symbolic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

procedural data generation
symbolic reasoning
verifiable training
curriculum learning
reasoning traces
Valentin Lacombe
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France
Valentin Quesnel
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France
Damien Sileo
Inria
Natural Language Processing · Reasoning · Datasets · LLMs · Synthetic data