🤖 AI Summary
To balance test-time performance against inference cost in instruction-following large language models (LLMs), this paper proposes a lightweight test-time optimization framework. It combines Minimum Bayes Risk (MBR) decoding, which re-ranks a set of candidate outputs, with a small LLM judge (as small as 1.5B parameters) assessing the outputs of a 70B model, and couples this with iterative self-training via Direct Preference Optimisation (DPO). The key contribution is systematic evidence that small judges can effectively supervise much larger models, together with a synergy between MBR decoding and DPO self-training: self-training bakes the MBR gains into the model itself, eliminating the extra test-time cost. Experiments on AlpacaEval and MT-Bench show significant improvements over greedy decoding, Best-of-N, and existing MBR baselines. After self-training, the optimized model with plain greedy decoding matches and sometimes exceeds the original 70B model's MBR results.
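The MBR re-ranking step can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_judge` is a hypothetical stand-in for the LLM judge (in practice, a small instruction-tuned model prompted to score a candidate against a reference), and the candidates are toy strings.

```python
def mbr_select(candidates, judge_score):
    """Minimum Bayes Risk selection: return the candidate with the
    highest average judge utility, treating every *other* candidate
    in turn as a pseudo-reference."""
    best, best_util = None, float("-inf")
    for i, cand in enumerate(candidates):
        refs = [r for j, r in enumerate(candidates) if j != i]
        util = sum(judge_score(cand, ref) for ref in refs) / max(len(refs), 1)
        if util > best_util:
            best, best_util = cand, util
    return best

def toy_judge(candidate, reference):
    """Toy stand-in judge: Jaccard token overlap with the reference.
    A real system would query a small LLM judge (e.g. 1.5B) instead."""
    c, r = set(candidate.split()), set(reference.split())
    return len(c & r) / max(len(c | r), 1)

candidates = [
    "the cat sat on the mat",
    "a cat sat on a mat",
    "completely unrelated text here",
]
print(mbr_select(candidates, toy_judge))  # → "the cat sat on the mat"
```

The outlier candidate scores poorly against every pseudo-reference, so MBR favours outputs that the judge rates as consistent with the rest of the candidate pool.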
📝 Abstract
General-purpose LLM judges capable of human-level evaluation provide not only a scalable and accurate way of evaluating instruction-following LLMs but also new avenues for supervising and improving their performance. One promising way of leveraging LLM judges for supervision is through Minimum Bayes Risk (MBR) decoding, which uses a reference-based evaluator to select a high-quality output from amongst a set of candidate outputs. In the first part of this work, we explore using MBR decoding as a method for improving the test-time performance of instruction-following LLMs. We find that MBR decoding with reference-based LLM judges substantially improves over greedy decoding, best-of-N decoding with reference-free judges and MBR decoding with lexical and embedding-based metrics on AlpacaEval and MT-Bench. These gains are consistent across LLMs with up to 70B parameters, demonstrating that smaller LLM judges can be used to supervise much larger LLMs. Then, seeking to retain the improvements from MBR decoding while mitigating additional test-time costs, we explore iterative self-training on MBR-decoded outputs. We find that self-training using Direct Preference Optimisation leads to significant performance gains, such that the self-trained models with greedy decoding generally match and sometimes exceed the performance of their base models with MBR decoding.
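The self-training step can be sketched as follows, under stated assumptions: the MBR-ranked candidates supply the preference pairs (top choice as "chosen", lower-ranked candidates as "rejected"), and a standard DPO loss is applied per pair. The log-probability values and `beta` below are illustrative, not from the paper.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares the policy's log-prob ratios for the
    chosen vs. rejected output against a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def build_pairs(ranked_candidates):
    """Pair the MBR-selected top output (chosen) with each
    lower-ranked candidate (rejected) to form DPO training pairs."""
    chosen = ranked_candidates[0]
    return [(chosen, rejected) for rejected in ranked_candidates[1:]]

pairs = build_pairs(["best answer", "ok answer", "bad answer"])
print(pairs)  # [('best answer', 'ok answer'), ('best answer', 'bad answer')]
print(dpo_loss(-1.0, -3.0, -2.0, -2.5, beta=0.1))
```

Iterating this loop (generate candidates, MBR-select, train with DPO, regenerate) is what lets the self-trained model recover MBR-level quality with plain greedy decoding.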