DéjàQ: Open-Ended Evolution of Diverse, Learnable and Verifiable Problems

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static datasets often encourage models to memorize rather than generalize, limiting the sustained improvement of mathematical reasoning capabilities. To address this, the authors propose DéjàQ, a framework that uses large language models to dynamically generate diverse synthetic math problems during training via two mutation strategies: autonomously altering either a problem's context or its structural formulation. This allows problem difficulty to co-evolve with model proficiency. Integrated with reinforcement learning, DéjàQ adds mechanisms for verifying problem validity, dynamically adjusting difficulty, and controlling computational overhead, ensuring that the generated problems are both novel and learnable. Experiments show that DéjàQ substantially improves performance on mathematical reasoning tasks, confirming that dynamically evolving training data aids generalization.
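The summary describes an evolutionary loop: mutate problems (context or structure), check validity, and keep only mutants in a learnable difficulty band. The sketch below illustrates that loop in miniature; every function name, threshold, and the string-replacement "mutations" (stand-ins for LLM calls) are illustrative assumptions, not the paper's actual implementation.

```python
import random

def mutate_context(problem: str) -> str:
    """Context mutation: swap surface details (stub for an LLM call)."""
    return problem.replace("apples", "books")

def mutate_structure(problem: str) -> str:
    """Structural mutation: change the problem's form (stub for an LLM call)."""
    return problem + " Then double the result."

def solve_rate(problem: str) -> float:
    """Stub: fraction of model rollouts that solve the problem.
    In the real system this would come from RL policy samples."""
    return random.random()

def evolve(pool: list[str], low: float = 0.2, high: float = 0.8) -> list[str]:
    """One evolution step: mutate each problem and keep the mutant only if
    its estimated solve rate falls in the 'learnable' band (neither trivial
    nor hopeless), mirroring the difficulty-adjustment idea. Otherwise the
    parent problem is retained, so the pool size stays constant."""
    next_pool = []
    for parent in pool:
        mutate = random.choice([mutate_context, mutate_structure])
        child = mutate(parent)
        if low <= solve_rate(child) <= high:
            next_pool.append(child)
        else:
            next_pool.append(parent)
    return next_pool

pool = ["Alice has 3 apples and buys 4 more. How many apples does she have?"]
pool = evolve(pool)
print(len(pool))  # pool size is preserved each step
```

The band filter is a common proxy for "learnability" in curriculum-style RL; the paper's precise criterion may differ.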

📝 Abstract
Recent advances in reasoning models have yielded impressive results in mathematics and coding. However, most approaches rely on static datasets, which have been suggested to encourage memorisation and limit generalisation. We introduce DéjàQ, a framework that departs from this paradigm by jointly evolving a diverse set of synthetic mathematical problems alongside model training. This evolutionary process adapts to the model's ability throughout training, optimising problems for learnability. We propose two LLM-driven mutation strategies in which the model itself mutates the training data, either by altering contextual details or by directly modifying problem structure. We find that the model can generate novel and meaningful problems, and that these LLM-driven mutations improve RL training. We analyse key aspects of DéjàQ, including the validity of generated problems and computational overhead. Our results underscore the potential of dynamically evolving training data to enhance mathematical reasoning and indicate broader applicability, which we will support by open-sourcing our code.
Problem

Research questions and friction points this paper is trying to address.

static datasets
memorisation
generalisation
mathematical reasoning
training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

open-ended evolution
learnable problems
LLM-driven mutation
dynamic training data
mathematical reasoning