🤖 AI Summary
To address wasted compute in test-time alignment of large language models (LLMs), this paper proposes a two-stage, fine-tuning-free method that adaptively allocates Best-of-N sampling budgets according to prompt difficulty. First, a lightweight exploration phase estimates the reward distribution for each prompt; then the remaining sampling budget is allocated across prompts based on these estimates, yielding prompt-level adaptive optimization. The framework is plug-and-play and balances efficiency against alignment quality. Experiments across 12 LM/RM combinations and 50 batches of prompts on AlpacaEval show: (i) consistent gains over uniform budget allocation under identical compute budgets; (ii) performance competitive with uniform allocation even when the uniform baseline receives a 20% larger budget; and (iii) gains that grow as batch size increases. The core contribution is the first prompt-level adaptive budget allocation mechanism, coupling reward-distribution estimation with sampling-resource scheduling inside the test-time alignment pipeline.
📝 Abstract
Recent advances in test-time alignment methods, such as Best-of-N sampling, offer a simple and effective way to steer language models (LMs) toward preferred behaviors using reward models (RMs). However, these approaches can be computationally expensive, especially when applied uniformly across prompts without accounting for differences in alignment difficulty. In this work, we propose a prompt-adaptive strategy for Best-of-N alignment that allocates inference-time compute more efficiently. Motivated by latency concerns, we develop a two-stage algorithm: an initial exploratory phase estimates the reward distribution for each prompt using a small exploration budget, and a second stage adaptively allocates the remaining budget using these estimates. Our method is simple, practical, and compatible with any LM/RM combination. Empirical results on the AlpacaEval dataset for 12 LM/RM pairs and 50 different batches of prompts show that our adaptive strategy consistently outperforms uniform allocation at the same inference budget. Moreover, our experiments show that our adaptive strategy remains competitive against uniform allocations with 20% larger inference budgets, and even improves in performance as the batch size grows.
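The two-stage procedure can be sketched in a few lines. Note the allocation rule below is an illustrative assumption, not the paper's: here the remaining budget is split in proportion to each prompt's estimated reward standard deviation (high-variance prompts plausibly benefit most from extra samples), and `sample_reward` stands in for the real pipeline step of generating a response with the LM and scoring it with the RM.

```python
import statistics

def explore(sample_reward, prompts, k):
    """Stage 1: draw k responses per prompt and record their rewards."""
    return {p: [sample_reward(p) for _ in range(k)] for p in prompts}

def allocate(reward_table, remaining):
    """Stage 2 (illustrative rule, not the paper's): split the remaining
    budget in proportion to each prompt's estimated reward std. deviation,
    so high-variance ("hard") prompts receive more samples."""
    stds = {p: statistics.pstdev(r) + 1e-9 for p, r in reward_table.items()}
    total = sum(stds.values())
    alloc = {p: int(remaining * s / total) for p, s in stds.items()}
    # Hand out samples lost to integer rounding, highest-variance first.
    leftover = remaining - sum(alloc.values())
    for p in sorted(alloc, key=stds.get, reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

def adaptive_best_of_n(sample_reward, prompts, total_budget, explore_k=2):
    """Adaptive Best-of-N: explore, allocate, then keep the top-reward
    response per prompt. Returns (best rewards, per-prompt allocation)."""
    table = explore(sample_reward, prompts, explore_k)
    remaining = total_budget - explore_k * len(prompts)
    alloc = allocate(table, remaining)
    best = {}
    for p in prompts:
        rewards = table[p] + [sample_reward(p) for _ in range(alloc[p])]
        best[p] = max(rewards)  # Best-of-N selection by reward
    return best, alloc
```

Under this rule, a prompt whose exploratory rewards are tightly clustered (the RM scores all candidates similarly) is left near the exploration budget, while a prompt with widely spread rewards absorbs most of the remaining samples, which is one way to realize the paper's prompt-level adaptivity without any fine-tuning.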