Comparing Few to Rank Many: Active Human Preference Learning using Randomized Frank-Wolfe

📅 2024-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of efficiently learning human preference rankings over a large set of $N$ items from sparse $K$-wise comparison feedback, where $K \ll N$. For parameter estimation under the Plackett–Luce model, we propose the first active querying framework grounded in D-optimal experimental design. To overcome the prohibitive $O\big(\binom{N}{K}\big)$ computational complexity of exhaustive search, we devise a randomized Frank–Wolfe algorithm that yields an approximately optimal, scalable sampling strategy. We establish theoretical guarantees on its convergence and statistical efficiency. Empirical evaluation on synthetic data and open-source NLP datasets demonstrates significant improvements over baselines: 3–5× higher query efficiency and superior model fit accuracy. Our core contributions are (i) the first incorporation of D-optimal design into preference learning, and (ii) the development of an efficient optimization method enabling scalable active learning for large-scale $K$-wise comparisons.
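The D-optimal design objective referred to above can be written in a generic form as follows (the notation is assumed for illustration, not taken from the paper; the paper instantiates the information matrix for the Plackett–Luce model):

```latex
\max_{w \in \Delta}\; \log\det\Big(\sum_{S \in \binom{[N]}{K}} w_S \, I_S(\theta)\Big),
\qquad
\Delta = \Big\{ w \ge 0 : \textstyle\sum_{S} w_S = 1 \Big\},
```

where $I_S(\theta)$ denotes the Fisher information contributed by a $K$-way comparison on subset $S$. The variable $w$ ranges over all $\binom{N}{K}$ feasible subsets, which is exactly what makes the linear maximization step in Frank–Wolfe expensive and motivates the randomized variant.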

📝 Abstract
We study learning of human preferences from limited comparison feedback. This task is ubiquitous in machine learning, and its applications, such as reinforcement learning from human feedback, have been transformational. We formulate this problem as learning a Plackett-Luce model over a universe of $N$ choices from $K$-way comparison feedback, where typically $K \ll N$. Our solution is the D-optimal design for the Plackett-Luce objective. The design defines a data logging policy that elicits comparison feedback for a small collection of optimally chosen points from all ${N \choose K}$ feasible subsets. The main algorithmic challenge in this work is that even fast methods for solving D-optimal designs would have $O({N \choose K})$ time complexity. To address this issue, we propose a randomized Frank-Wolfe (FW) algorithm that solves the linear maximization sub-problems in the FW method on randomly chosen variables. We analyze the algorithm, and evaluate it empirically on synthetic and open-source NLP datasets.
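The randomized Frank-Wolfe idea in the abstract can be sketched in a simplified setting: classic D-optimal design over $N$ feature vectors (rather than the paper's $K$-wise Plackett-Luce information matrices), where each FW iteration solves the linear maximization sub-problem only over a random batch of candidates. All names and parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def randomized_fw_d_optimal(X, num_iters=300, batch=30, seed=0):
    """Approximate D-optimal design weights over the rows of X.

    Maximizes log det(sum_i w_i x_i x_i^T) over the probability simplex.
    The randomized twist: each Frank-Wolfe step scores only a random
    batch of candidates instead of all N (a simplified stand-in for the
    paper's randomized linear-maximization over K-subsets).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # uniform initial design
    ridge = 1e-6                                 # keeps V invertible early on
    for t in range(num_iters):
        V = X.T @ (w[:, None] * X) + ridge * np.eye(d)   # information matrix
        Vinv = np.linalg.inv(V)
        idx = rng.choice(n, size=min(batch, n), replace=False)
        # Gradient of log det V w.r.t. w_i is the quadratic form x_i^T V^{-1} x_i,
        # so the linear sub-problem picks the batch candidate maximizing it.
        scores = np.einsum('ij,jk,ik->i', X[idx], Vinv, X[idx])
        i_star = idx[np.argmax(scores)]
        gamma = 2.0 / (t + 2.0)                  # standard FW step size
        w *= (1.0 - gamma)
        w[i_star] += gamma
    return w
```

Restricting the argmax to a random batch reduces per-iteration cost from $O(N)$ candidate evaluations to $O(\text{batch})$, which is the same trade-off the paper exploits at the much larger scale of $\binom{N}{K}$ subsets.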
Problem

Research questions and friction points this paper is trying to address.

Preference Learning
Limited Comparison Information
Large-scale Options
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic Frank-Wolfe Algorithm
D-Optimal Design Optimization
Plackett-Luce Model
K. K. Thekumparampil
AWS AI Labs, Amazon
G. Hiranandani
Search, Amazon Typeface
Kousha Kalantari
AWS AI Labs, Amazon
Shoham Sabach
Associate Professor, Cornell, School of Operations Research and Information Engineering
Optimization · Machine Learning · Optimization Algorithms
B. Kveton
Adobe Research