AI Summary
This work addresses the challenge of jointly optimizing data and model configurations in large language model training, a task rendered difficult by their high coupling. To this end, we propose JoBS, the first method to enable efficient joint optimization by integrating a scaling law-informed performance predictor into Bayesian optimization and leveraging multi-fidelity evaluation to substantially reduce the cost of full-scale training. JoBS not only yields an optimal budget allocation strategy but also consistently outperforms baselines that optimize only data, only model hyperparameters, or existing multi-fidelity Bayesian optimization approaches, achieving superior performance across diverse large language model tasks under identical computational budgets.
Abstract
Co-optimizing data and model configurations for training LLMs presents a classic chicken-and-egg dilemma: the best training data configuration (e.g., data mixture) for a downstream task depends on the chosen model configuration (e.g., model architecture), and vice versa. However, jointly optimizing both data and model configurations is often deemed intractable, and existing methods focus on either data or model optimization without considering their interaction. We introduce JoBS, an approach that uses a scaling-law-inspired performance predictor to aid Bayesian optimization (BO) in jointly optimizing LLM training data and model configurations efficiently. JoBS allocates a portion of the optimization budget to learning an LLM performance predictor that estimates how promising a training configuration is from a small number of training steps. The remaining budget is used to perform BO entirely with the predictor, effectively amortizing the cost of full training runs. We study JoBS's average regret and derive the optimal budget allocation to minimize it. JoBS outperforms existing multi-fidelity BO baselines, as well as data and model optimization approaches, across diverse LLM tasks under the same optimization budget.
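To make the multi-fidelity idea concrete, the following is a minimal, self-contained sketch (not the paper's actual implementation; all configuration values and curve parameters below are invented for illustration). It fits a simple power-law loss curve L(s) = a * s^(-b) to a handful of cheap early-step measurements for each candidate configuration, extrapolates each curve to the full training horizon, and ranks candidates by predicted final loss rather than by expensive full-scale training.

```python
import math

def fit_power_law(steps, losses):
    """Least-squares fit of log L = log a - b * log s (simple power law,
    ignoring any irreducible-loss term for illustration)."""
    xs = [math.log(s) for s in steps]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (a, b)

def predict_final_loss(steps, losses, final_step):
    """Extrapolate a short loss trace to the full training horizon."""
    a, b = fit_power_law(steps, losses)
    return a * final_step ** (-b)

# Synthetic "candidate configurations": each induces a different true
# loss curve a * s^(-b). These numbers are made up for the demo; in
# practice the losses would come from short low-fidelity training runs.
candidates = [(6.0, 0.08), (5.0, 0.10), (7.0, 0.12)]
early_steps = [100, 200, 400, 800]   # cheap low-fidelity evaluations
final_step = 100_000                 # full-scale training horizon

preds = []
for a, b in candidates:
    losses = [a * s ** (-b) for s in early_steps]  # stand-in for real training
    preds.append(predict_final_loss(early_steps, losses, final_step))

best_idx = min(range(len(preds)), key=preds.__getitem__)
true_final = [a * final_step ** (-b) for a, b in candidates]
true_best = min(range(len(true_final)), key=true_final.__getitem__)
```

On these noiseless synthetic curves the extrapolation is exact, so the predictor recovers the true best candidate; the actual method additionally uses the predictor inside a BO loop and splits the budget between learning the predictor and searching with it.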