🤖 AI Summary
This work addresses the high cost and limited generalizability of manually designed data processing pipelines, known as "data recipes," for adapting large language models (LLMs). It formulates data recipe construction as an end-to-end reinforcement learning problem and introduces an online reinforcement learning framework that uses a proxy reward model to predict the downstream task performance of candidate recipes. This framework enables the 32B-parameter model, DataChef-32B, to autonomously generate high-performing data recipes without human intervention. Evaluated on six unseen tasks, the method matches or exceeds human expert performance; notably, the recipe it generates for Qwen3-1.7B-Base in the mathematical domain scores 66.7 on the AIME'25 benchmark, surpassing Qwen3-1.7B and advancing the development of self-evolving AI systems.
📝 Abstract
In the current landscape of Large Language Models (LLMs), the curation of large-scale, high-quality training data is a primary driver of model performance. A key lever is the \emph{data recipe}, the data processing pipeline that transforms raw sources into training corpora. Despite the growing use of LLMs to automate individual data processing steps, such as data synthesis and filtering, the overall design of data recipes remains largely manual and labor-intensive, requiring substantial human expertise and iteration. To bridge this gap, we formulate the task of \emph{end-to-end data recipe generation} for LLM adaptation: given a target benchmark and a pool of available data sources, a model must output a complete data recipe that adapts a base LLM to the target task. We present DataChef-32B, which performs online reinforcement learning using a proxy reward that predicts the downstream performance of candidate recipes. Across six held-out tasks, DataChef-32B produces practical recipes whose downstream performance is comparable to that of recipes curated by human experts. Notably, the recipe from DataChef-32B adapts Qwen3-1.7B-Base to the math domain, achieving 66.7 on AIME'25 and surpassing Qwen3-1.7B. This work sheds new light on automating LLM training and developing self-evolving AI systems.
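The abstract describes an online reinforcement learning loop in which a policy proposes data recipes and a proxy reward model predicts their downstream performance, avoiding a full training run per candidate. A minimal sketch of such a loop is below; the toy operation pool, the hand-coded proxy reward, and all names are illustrative assumptions, not the paper's actual components or training algorithm.

```python
import math
import random

# Hypothetical pool of recipe operations (illustrative, not from the paper).
OPS = ["dedup", "quality_filter", "synthesize", "rewrite", "mix_sources"]

def proxy_reward(recipe):
    """Stand-in for a learned proxy reward model: a fixed function that
    favors recipes containing dedup and quality_filter steps."""
    score = 0.0
    if "dedup" in recipe:
        score += 0.5
    if "quality_filter" in recipe:
        score += 0.3
    score -= 0.05 * len(recipe)  # mild penalty for overly long pipelines
    return score

def sample_recipe(logits, rng, length=3):
    """Sample a recipe (a sequence of ops) from a softmax policy."""
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return [rng.choices(OPS, probs)[0] for _ in range(length)]

def train(steps=500, lr=0.1, seed=0):
    """REINFORCE-style loop: sample recipes, score them with the proxy
    reward, and raise the logits of ops in above-baseline recipes."""
    rng = random.Random(seed)
    logits = [0.0] * len(OPS)
    baseline = 0.0
    for _ in range(steps):
        recipe = sample_recipe(logits, rng)
        r = proxy_reward(recipe)
        adv = r - baseline          # advantage relative to running mean
        baseline = 0.9 * baseline + 0.1 * r
        for op in recipe:
            logits[OPS.index(op)] += lr * adv
    return logits

logits = train()
best_op = OPS[logits.index(max(logits))]
```

The proxy reward here plays the role the abstract assigns to the learned predictor of downstream performance: it lets the policy receive feedback online, per candidate recipe, rather than after an expensive LLM adaptation run.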