AI Summary
Existing machine learning datasets inadequately support AI-assisted, open-ended research in professional mathematics, particularly in algebraic combinatorics, because they lack sufficient scale, structural richness, and formal verifiability.
Method: We introduce the Algebraic Combinatorics Dataset Repository (ACD Repo), a benchmark suite designed for research-level mathematics, covering nine foundational results or open problems with large collections of structured, formally verifiable examples (up to ten million per problem). Our approach integrates supervised training of narrow models, model interpretability analysis, large language model (LLM)-driven program synthesis, and symbolic encoding of combinatorial structures, establishing an "interpretable modeling + program synthesis" paradigm for conjecture generation.
Contribution/Results: We release nine high-quality, reproducible datasets; substantially lower the barrier to AI-augmented original mathematical conjecturing; and empirically characterize the limits of neural models in abstract pattern induction, demonstrating both their capacity for nontrivial structural generalization and their systematic failures in higher-order combinatorial reasoning.
Abstract
With recent dramatic increases in AI system capabilities, there has been growing interest in using machine learning for reasoning-heavy, quantitative tasks, particularly mathematics. While many resources capture mathematics at the high-school, undergraduate, and graduate levels, far fewer match the level of difficulty and open-endedness encountered by professional mathematicians working on open problems. To address this, we introduce a new collection of datasets, the Algebraic Combinatorics Dataset Repository (ACD Repo), representing either foundational results or open problems in algebraic combinatorics, a subfield of mathematics that studies discrete structures arising from abstract algebra. Our collection is further differentiated by its focus on the conjecturing process. Each dataset includes an open-ended, research-level question and a large collection of examples (up to 10M in some cases) from which conjectures should be generated. We describe all nine datasets, the different ways machine learning models can be applied to them (e.g., training narrow models followed by interpretability analysis, or program synthesis with LLMs), and discuss some of the challenges involved in designing datasets like these.
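To make the "narrow models followed by interpretability analysis" workflow concrete, here is a minimal, self-contained sketch on toy stand-in data (it does not use the ACD Repo's actual formats or APIs, which are not assumed here): train a small linear model on labeled combinatorial examples, then read off the learned weights to suggest a human-checkable conjecture. The dataset, features, and labeling rule below are all hypothetical illustrations.

```python
# Hypothetical sketch: narrow-model training + interpretability on toy
# combinatorial data. Examples are permutations of {1..5}; the (secret)
# label is "has a descent at position 1", i.e. pi(1) > pi(2).
from itertools import permutations
import numpy as np

perms = list(permutations(range(1, 6)))

def featurize(p):
    # One-hot encode the values in the first two positions (10 features).
    x = np.zeros(10)
    x[p[0] - 1] = 1.0        # value at position 1
    x[5 + p[1] - 1] = 1.0    # value at position 2
    return x

X = np.array([featurize(p) for p in perms])
y = np.array([1.0 if p[0] > p[1] else 0.0 for p in perms])

# The "narrow model": logistic regression fit by plain gradient descent.
w = np.zeros(10)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (pred - y) / len(y)

acc = np.mean(((X @ w) > 0) == (y == 1))
print(f"train accuracy: {acc:.2f}")

# Interpretability step: position-1 weights increase with the value while
# position-2 weights decrease, suggesting the conjecture
# "label <=> pi(1) - pi(2) > 0" -- exactly the descent condition.
print("position-1 weights:", np.round(w[:5], 2))
print("position-2 weights:", np.round(w[5:], 2))
```

The point of the sketch is the second step: once the small model fits the data, its weight structure (here, a monotone pattern over values) is simple enough for a mathematician to turn into a precise conjecture and then attempt a proof.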