🤖 AI Summary
This work addresses the challenges of configuring LLM post-training pipelines: a high-dimensional, heterogeneous search space, strong coupling between stages, and prohibitive end-to-end evaluation costs. To tackle these issues, the authors propose AutoPipe, a framework that combines offline learning from historical runs with online Bayesian optimization. AutoPipe uses a two-stage, budget-aware configuration selection mechanism built on three components: a learning-to-rank surrogate that provides cross-dataset transfer guidance, a Gaussian-process residual model that captures dataset-specific deviations, and an early-signal-based performance predictor that cuts per-trial evaluation cost. Empirical results on biomedical reasoning tasks show that AutoPipe matches or exceeds the post-training performance of the strongest existing online methods at less than 10% of their computational cost.
📝 Abstract
LLM post-training pipelines that combine supervised fine-tuning and reinforcement learning are difficult to configure under realistic compute budgets: the configuration space is high-dimensional and heterogeneous, stages are strongly coupled, and each end-to-end evaluation is expensive. We propose AutoPipe, a budget-aware two-stage framework for configuration selection in LLM post-training. Offline, AutoPipe learns a dataset-conditioned learning-to-rank surrogate from historical runs, capturing within-dataset preferences and providing transferable guidance toward promising regions of the configuration space. Online, for a new dataset, AutoPipe uses the offline guidance to steer Bayesian optimization and models dataset-specific deviations with a Gaussian-process residual surrogate. To reduce evaluation cost, each trial is early-stopped and scored by a learned predictor that maps early training signals to a low-cost proxy for final post-training performance. Experiments on biomedical reasoning tasks show that AutoPipe consistently outperforms offline-only baselines and achieves comparable performance to the strongest online HPO baselines while using less than 10% of their computational cost.
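The two-stage selection loop described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: `offline_prior` stands in for the learned learning-to-rank surrogate, `early_proxy` stands in for the early-signal performance predictor, the Gaussian-process residual model is a tiny stdlib-only GP with an RBF kernel, and the UCB acquisition rule and all hyperparameters are assumptions made here for illustration.

```python
import math
import random

def rbf(x, y, ls=1.0):
    """RBF kernel between two configuration vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class GPResidual:
    """GP regression on residuals (observed proxy score minus offline prior)."""
    def __init__(self, noise=1e-6):
        self.X, self.y, self.noise = [], [], noise
    def add(self, x, r):
        self.X.append(x)
        self.y.append(r)
    def predict(self, x):
        if not self.X:
            return 0.0, 1.0  # prior mean 0, prior std 1 (rbf(x, x) = 1)
        K = [[rbf(a, b) + (self.noise if i == j else 0.0)
              for j, b in enumerate(self.X)] for i, a in enumerate(self.X)]
        alpha = solve(K, self.y)
        k = [rbf(x, a) for a in self.X]
        mean = sum(ki * ai for ki, ai in zip(k, alpha))
        v = solve(K, k)
        var = max(1e-12, rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v)))
        return mean, math.sqrt(var)

def offline_prior(cfg):
    # Hypothetical stand-in for the dataset-conditioned learning-to-rank surrogate.
    return -abs(cfg[0] - 0.3) - 0.5 * abs(cfg[1] - 0.7)

def early_proxy(cfg):
    # Hypothetical stand-in: early-stopped trial scored by the learned predictor.
    return -(cfg[0] - 0.25) ** 2 - (cfg[1] - 0.6) ** 2

random.seed(0)
gp = GPResidual()
candidates = [(random.random(), random.random()) for _ in range(200)]
evaluated = []
for trial in range(10):
    def ucb(c):
        # Acquisition: offline prior + residual mean + exploration bonus.
        m, s = gp.predict(list(c))
        return offline_prior(c) + m + 1.0 * s
    cfg = max(candidates, key=ucb)
    score = early_proxy(cfg)  # cheap early-stopped evaluation, not a full run
    gp.add(list(cfg), score - offline_prior(cfg))
    evaluated.append((score, cfg))
best = max(evaluated)
print(best)
```

The GP models only the gap between the offline prior and the cheap online proxy, so the online search starts from the transferred guidance and corrects it with a handful of early-stopped trials rather than full end-to-end evaluations.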