🤖 AI Summary
Best-of-N distillation (BOND) suffers from dual bottlenecks in sample and computational efficiency for large language model alignment. This paper establishes a fundamental game-theoretic equivalence between BOND and self-play alignment, enabling the design of WIND—a win-rate advantage-driven framework. WIND improves efficiency via three key mechanisms: regularization on win-rate advantage, parameter-space approximation, and accelerated iterative distillation—while preserving theoretical rigor. Notably, the paper provides the first provable upper bound on sample complexity for a squared-loss variant of WIND. Empirical results across multiple alignment benchmarks demonstrate that WIND consistently outperforms state-of-the-art BOND and self-play methods using fewer samples and lower computational cost, achieving both strong theoretical guarantees and practical efficacy.
📝 Abstract
Recent advances in aligning large language models with human preferences have underscored the growing importance of best-of-N distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computational inefficiency. This paper addresses the problem by revealing a game-theoretic connection between iterative BOND and self-play alignment, which unifies these seemingly disparate algorithmic paradigms. Building on this connection, we establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win-rate dominance optimization that approximate iterative BOND in the parameter space. We provide a provable sample-efficiency guarantee for one WIND variant with a squared-loss objective. Experimental results confirm that our algorithm not only accelerates computation but also achieves superior sample efficiency compared to existing methods.
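To make the squared-loss idea concrete, here is a minimal toy sketch. It is an illustration under stated assumptions, not the paper's exact formulation: all names (`P`, `win_rate`, `squared_loss`) are hypothetical, the preference model is a random stand-in, and plain gradient descent on per-response scores stands in for parameter-space policy updates. The sketch shows the basic shape of a win-rate-driven squared-loss objective: estimate each candidate response's average win rate from pairwise preferences, then regress policy scores onto those targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # candidate responses for one prompt

# Stand-in preference model: P[i, j] = probability response i beats j.
P = rng.uniform(size=(n, n))
P = P / (P + P.T)          # enforce P[i, j] + P[j, i] = 1
np.fill_diagonal(P, 0.5)   # a response ties with itself

# Average win rate of each response against the other candidates.
win_rate = (P.sum(axis=1) - 0.5) / (n - 1)

def squared_loss(scores, targets):
    """Squared loss pushing policy scores toward win-rate targets."""
    return np.mean((scores - targets) ** 2)

# Gradient descent on the scores (a stand-in for parameter updates).
scores = np.zeros(n)
for _ in range(200):
    grad = 2.0 * (scores - win_rate) / n
    scores -= 0.5 * grad

print(squared_loss(scores, win_rate))  # loss shrinks toward zero
```

In the paper's setting the targets would come from the regularized win-rate dominance objective and the regression would update the policy's parameters directly; the squared loss is what enables the sample-complexity analysis mentioned above.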