🤖 AI Summary
This study investigates whether uncertainty-driven active preference learning (APL) in online Direct Preference Optimization (DPO) can substantially outperform random sampling under strong pretrained priors. We systematically compare APL and random sampling across dimensions including harmlessness, helpfulness, and instruction following, evaluating performance using both reward models and LLM-as-a-judge metrics. Our experiments reveal that APL yields only marginal improvements in proxy win rates, fails to meaningfully mitigate capability degradation or reduce variance, and incurs computational overhead that is difficult to justify. Notably, we uncover a previously unreported decoupling between gains in preference win rates and declines in general capabilities, suggesting that, within the current paradigm, the "cheap diversity" afforded by random sampling offers superior cost-effectiveness.
📝 Abstract
Modern LLMs inherit strong priors from web-scale pretraining, which can limit the headroom of post-training data-selection strategies. While Active Preference Learning (APL) seeks to optimize query efficiency in online Direct Preference Optimization (DPO), the inherent richness of on-policy candidate pools often renders simple random sampling a surprisingly formidable baseline. We evaluate uncertainty-based APL against random sampling across harmlessness, helpfulness, and instruction-following settings, utilizing both reward models and LLM-as-a-judge proxies. We find that APL yields negligible improvements in proxy win rates compared to random sampling. Crucially, we observe a dissociation where win rate improves even as general capability, measured by standard benchmarks, degrades. APL fails to mitigate this capability collapse or reduce variance significantly better than random sampling. Our findings suggest that in the regime of strong pretrained priors, the computational overhead of active selection is difficult to justify against the "cheap diversity" provided by simple random samples. Our code is available at https://github.com/BootsofLagrangian/random-vs-apl.
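The contrast between the two selection strategies can be sketched as follows. This is an illustrative assumption, not the paper's exact implementation: a common uncertainty criterion in APL scores each candidate preference pair by the binary entropy of the Bradley-Terry preference probability implied by its reward margin, then selects the most uncertain pairs, while the baseline samples uniformly. Function names (`select_apl`, `select_random`) are hypothetical.

```python
import math
import random


def preference_uncertainty(margin: float) -> float:
    """Binary entropy of the preference probability sigmoid(margin).

    Margins near 0 give p ~ 0.5 and maximal entropy; large |margin|
    gives a confident preference and low entropy.
    """
    p = 1.0 / (1.0 + math.exp(-margin))
    p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard log(0)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))


def select_apl(margins: list[float], k: int) -> list[int]:
    """Uncertainty-based APL: keep the k candidate pairs whose
    implied preference probability is closest to 0.5."""
    ranked = sorted(range(len(margins)),
                    key=lambda i: -preference_uncertainty(margins[i]))
    return sorted(ranked[:k])


def select_random(margins: list[float], k: int, seed: int = 0) -> list[int]:
    """Random baseline: k pairs drawn uniformly from the pool."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(len(margins)), k))


# Toy candidate pool of reward margins (chosen minus rejected score).
margins = [3.0, 0.1, -2.5, 0.05, 1.2]
print(select_apl(margins, 2))     # picks the two near-zero margins
print(len(select_random(margins, 2)))
```

Under this criterion, APL concentrates queries where the reward model is least decided; the paper's finding is that, against the diversity already present in on-policy candidate pools, this targeting buys little over the uniform baseline.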