🤖 AI Summary
Existing dynamic data selection methods rely on task-specific handcrafted metrics or static criteria, limiting their adaptability to the evolving utility of data during training and their generalization across tasks. This work proposes a training-aware dynamic data selection framework that formulates sample selection as a sequential decision-making problem co-evolving with model training. An end-to-end reinforcement learning agent is developed to intelligently select data, guided by a composite reward function that integrates loss-based difficulty and uncertainty signals derived from prediction confidence. A parameter-free adaptive reward weighting mechanism is introduced to balance these signals without manual tuning. The approach achieves over 50% reduction in training cost on benchmarks such as ImageNet-1k and MMLU without performance degradation, and demonstrates strong robustness under noisy conditions and plug-and-play generalization across diverse tasks.
📝 Abstract
Dynamic data selection aims to accelerate training by prioritizing informative samples during online training. However, existing methods typically rely on task-specific handcrafted metrics or static, snapshot-based criteria to estimate sample importance, which limits scalability across learning paradigms and makes it difficult to capture the evolving utility of data throughout training. To address this challenge, we propose Data Agent, an end-to-end dynamic data selection framework that formulates data selection as a training-aware sequential decision-making problem. The agent learns a sample-wise selection policy that co-evolves with model optimization, guided by a composite reward integrating loss-based difficulty and confidence-based uncertainty signals. These signals capture the complementary objectives of optimization impact and information gain, and a tuning-free adaptive weighting mechanism balances them over the course of training. Extensive experiments across a wide range of datasets and architectures demonstrate that Data Agent consistently accelerates training while preserving or improving performance, e.g., reducing training cost by over 50% on ImageNet-1k and MMLU with no loss in accuracy. Moreover, its dataset-agnostic formulation and modular reward make it plug-and-play across tasks and settings (e.g., it remains robust on noisy datasets), highlighting its potential for real-world use.
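To make the composite-reward idea concrete, here is a minimal sketch of how a per-sample reward could combine loss-based difficulty with confidence-based uncertainty under a parameter-free weighting. All names and the specific weighting rule (inverse batch standard deviation) are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def composite_reward(losses, probs, eps=1e-8):
    """Hypothetical composite reward for data selection.

    losses: per-sample training losses, shape (N,)
    probs:  per-sample predicted class probabilities, shape (N, C)

    Combines loss-based difficulty with confidence-based uncertainty
    (predictive entropy), weighting each signal by the inverse of its
    batch-level standard deviation so neither dominates as scales drift.
    This is an illustrative sketch, not the method described in the paper.
    """
    # Loss-based difficulty: higher loss suggests a harder, more
    # informative sample for the current model state.
    difficulty = np.asarray(losses, dtype=float)

    # Confidence-based uncertainty: entropy of the predicted distribution.
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    uncertainty = -np.sum(p * np.log(p), axis=1)

    # Parameter-free adaptive weighting (one plausible choice): normalize
    # each signal by its spread within the batch, then mix.
    w_d = 1.0 / (difficulty.std() + eps)
    w_u = 1.0 / (uncertainty.std() + eps)
    return (w_d * difficulty + w_u * uncertainty) / (w_d + w_u)
```

A selection policy could then rank or sample the batch according to this reward, keeping the highest-reward examples for the next optimization step.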