🤖 AI Summary
This work addresses the inefficiency and lack of dynamic optimization in data preparation for large language model (LLM) training. To overcome these limitations, we propose a novel end-to-end, agent-driven paradigm for adaptive data utilization, introducing intelligent agents into the data preparation pipeline for the first time. Our approach enables dynamic data selection, mixing, and reweighting during training through an automated, agent-based workflow, while establishing a unified framework for data-model interaction. By replacing conventional static data processing with this adaptive mechanism, our method significantly reduces manual intervention, enhances training efficiency and robustness, and offers a scalable, reusable, data-centric solution for next-generation LLMs.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks and domains, with data playing a central role in enabling these advances. Despite this success, the preparation and effective utilization of the massive datasets required for LLM training remain major bottlenecks. In current practice, LLM training data is often constructed using ad hoc scripts, and there is still a lack of mature, agent-based data preparation systems that can automatically construct robust and reusable data workflows, thereby freeing data scientists from repetitive and error-prone engineering efforts. Moreover, once collected, datasets are often consumed largely in their entirety during training, without systematic mechanisms for data selection, mixture optimization, or reweighting. To address these limitations, we advocate two complementary research directions. First, we propose building a robust, agent-based automatic data preparation system that supports automated workflow construction and scalable data management. Second, we argue for a unified data-model interaction training system in which data is dynamically selected, mixed, and reweighted throughout the training process, enabling more efficient, adaptive, and performance-aware data utilization. Finally, we discuss the remaining challenges and outline promising directions for future research and system development.