MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula

📅 2024-07-01
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing evaluations of LLMs' mathematical reasoning suffer from data contamination, coarse granularity, and poor scalability. Method: a curriculum-driven framework for synthesizing mathematical problems, grounded in 44 fine-grained K–8 Common Core learning standards. It pairs bidirectional symbolic-to-text generation with cycle-consistency verification for quality control, and introduces a novel task: generating mathematical conversational follow-up questions. The approach combines formal grammar modeling, LLM-based conditional generation, symbolic structure parsing, and training-trajectory analysis. Contribution/Results: across 23 LLMs, even the strongest models fail systematically on simple follow-up questions. The authors also chart when distinct mathematical capabilities emerge during Pythia-12B's training. The framework enables low-cost, reproducible, and scalable construction of high-quality, contamination-free evaluation benchmarks with fine-grained capability coverage.

📝 Abstract
Mathematical problem solving is an important skill for Large Language Models (LLMs), both as an important capability and a proxy for a range of reasoning abilities. Existing benchmarks probe a diverse set of skills, but they yield aggregate accuracy metrics, obscuring specific abilities or weaknesses. Furthermore, they are difficult to extend with new problems, risking data contamination over time. To address these challenges, we propose MathCAMPS: a method to synthesize high-quality mathematical problems at scale, grounded on 44 fine-grained "standards" from the Mathematics Common Core (CC) Standard for K-8 grades. We encode each standard in a formal grammar, allowing us to sample diverse symbolic problems and their answers. We then use LLMs to realize the symbolic problems into word problems. We propose a cycle-consistency method for validating problem faithfulness. Finally, we derive follow-up questions from symbolic structures and convert them into follow-up word problems - a novel task of mathematical dialogue that probes for robustness in understanding. Experiments on 23 LLMs show surprising failures even in the strongest models (in particular when asked simple follow-up questions). Moreover, we evaluate training checkpoints of Pythia 12B on MathCAMPS, allowing us to analyze when particular mathematical skills develop during its training. Our framework enables the community to reproduce and extend our pipeline for a fraction of the typical cost of building new high-quality datasets.
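The pipeline the abstract describes (sample a symbolic problem from a grammar, realize it as a word problem, then check cycle consistency by parsing the text back to symbolic form) can be illustrated with a toy sketch. This is hypothetical code, not the authors' implementation: the grammar is reduced to single-operation arithmetic, and deterministic stub functions stand in for the LLM realizer and parser.

```python
import random
import re

def sample_symbolic(rng):
    # Sample a tiny symbolic problem from a toy grammar: a OP b
    a, b = rng.randint(1, 20), rng.randint(1, 20)
    op = rng.choice(["+", "-"])
    return (a, op, b)

def answer(problem):
    # Ground-truth answer comes directly from the symbolic form
    a, op, b = problem
    return a + b if op == "+" else a - b

def realize(problem):
    # Stand-in for the LLM that turns symbolic problems into word problems
    a, op, b = problem
    verb = "gains" if op == "+" else "loses"
    return f"Ava has {a} apples and {verb} {b} more. How many apples does she have now?"

def parse_back(text):
    # Stand-in for the LLM that maps the word problem back to symbolic form
    m = re.search(r"has (\d+) apples and (gains|loses) (\d+)", text)
    a, verb, b = int(m.group(1)), m.group(2), int(m.group(3))
    return (a, "+" if verb == "gains" else "-", b)

def cycle_consistent(problem):
    # Keep the word problem only if the round-trip recovers the original
    return parse_back(realize(problem)) == problem

rng = random.Random(0)
kept = [p for p in (sample_symbolic(rng) for _ in range(5)) if cycle_consistent(p)]
print(len(kept))  # prints 5: the deterministic toy realizer always round-trips
```

In the real pipeline both `realize` and `parse_back` are LLM calls, so round-trips can fail; cycle consistency then acts as a filter that discards word problems whose text no longer faithfully encodes the sampled symbolic problem.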
Problem

Research questions and friction points this paper is trying to address.

Analyzing how mathematical reasoning evolves during LLM training
Investigating curriculum correlation between human-designed and model-learned skills
Identifying which mathematical abilities benefit or suffer from instruction tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing mathematical reasoning learning dynamics
Using synthetic dataset MathCAMPS for evaluation
Correlating skill acquisition with human curriculum order