🤖 AI Summary
Optimizing data mixture ratios for unknown downstream tasks remains challenging due to the absence of task-specific supervision during pretraining.
Method: This paper proposes DUET, the first algorithm that couples Bayesian optimization with online data selection, dynamically adapting cross-domain training data mixture ratios based on coarse-grained feedback from real downstream evaluations.
Contribution/Results: DUET provides theoretical guarantees, namely convergence and an upper bound on cumulative regret, by integrating online feedback-driven adaptation, cross-domain mixture modeling, and a rigorous regret analysis. Empirically, on image classification and large language model evaluation benchmarks, DUET significantly outperforms static mixture baselines, rapidly converging to high-performing data compositions and substantially improving generalization across tasks.
📝 Abstract
The performance of a machine learning (ML) model depends heavily on the relevance of its training data to the domain of the downstream evaluation task. However, in practice, the data involved in an unseen evaluation task is often not known to us (e.g., conversations between an LLM and a user are end-to-end encrypted). So, it is not obvious what data would be relevant for training/fine-tuning the ML model to maximize its task performance. Instead, one can only deploy the ML model in the unseen evaluation task to gather multiple rounds of coarse feedback on how well the model has performed. This paper presents a novel global-to-local algorithm called DUET that can exploit this feedback loop by interleaving a data selection method with Bayesian optimization. As a result, DUET can efficiently refine the training data mixture from a pool of data domains to maximize the model's performance on the unseen evaluation task, and its convergence to the optimal data mixture is theoretically guaranteed via an analysis of its cumulative regret. Empirical evaluation on image and LLM evaluation tasks shows that DUET finds better training data mixtures than conventional baselines.
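The global-to-local feedback loop described above can be sketched in a few lines. This is a toy illustration under assumptions, not the paper's implementation: `evaluate_model` is a hypothetical stand-in for "train on data drawn from this mixture, deploy, and receive one round of coarse downstream feedback", and the outer proposal step here uses simple random sampling over the mixture simplex where DUET would use Bayesian optimization (with an inner data selection step per round).

```python
import random


def evaluate_model(mixture, rng):
    # Hypothetical stand-in for the real feedback loop: train on the
    # mixture, deploy on the unseen task, observe one noisy coarse score.
    # Here, mixtures closer to an (unknown) target mixture score higher.
    target = [0.6, 0.3, 0.1]
    dist = sum((m - t) ** 2 for m, t in zip(mixture, target))
    return -dist + rng.gauss(0, 0.01)


def sample_mixture(n_domains, rng):
    # Draw a random point on the probability simplex (mixture ratios).
    w = [rng.random() for _ in range(n_domains)]
    s = sum(w)
    return [x / s for x in w]


def mixture_search_loop(n_domains=3, rounds=30, seed=0):
    rng = random.Random(seed)
    best_mix, best_score = None, float("-inf")
    for _ in range(rounds):
        # Outer (global) step: propose a candidate mixture. DUET would
        # use Bayesian optimization here instead of random sampling.
        mix = sample_mixture(n_domains, rng)
        # Inner (local) step + feedback: select/train on data per `mix`,
        # then observe one coarse score from the downstream task.
        score = evaluate_model(mix, rng)
        if score > best_score:
            best_mix, best_score = mix, score
    return best_mix, best_score
```

Over rounds, the loop refines the mixture toward compositions that score well on the unseen task; the paper's regret analysis bounds how much performance is lost to this exploration.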