Active Human Feedback Collection via Neural Contextual Dueling Bandits

📅 2025-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing preference feedback collection methods often assume a linear underlying reward function, a restrictive assumption that frequently fails in practical settings such as recommender systems and large language model alignment. To address this, the authors propose Neural-ADB, an algorithm built on the neural contextual dueling bandit framework that relaxes the linearity assumption and enables efficient learning of nonlinear latent reward functions from pairwise preferences. The approach combines the Bradley-Terry-Luce (BTL) preference model with neural representation learning, and the authors prove that the worst sub-optimality gap of the learned policy decreases at a sub-linear rate as the preference dataset grows. Empirical evaluation on synthetic benchmarks demonstrates significant improvements over linear baselines, establishing a theoretically grounded and practically viable approach to nonlinear preference modeling.

📝 Abstract
Collecting human preference feedback is often expensive, leading recent works to develop principled algorithms for collecting it more efficiently. However, these works assume that the underlying reward function is linear, an assumption that does not hold in many real-life applications, such as online recommendation and LLM alignment. To address this limitation, we propose Neural-ADB, an algorithm based on the neural contextual dueling bandit framework that provides a principled and practical method for collecting human preference feedback when the underlying latent reward function is non-linear. We theoretically show that when preference feedback follows the Bradley-Terry-Luce model, the worst sub-optimality gap of the policy learned by Neural-ADB decreases at a sub-linear rate as the preference dataset grows. Our experimental results on problem instances derived from synthetic preference datasets further validate the effectiveness of Neural-ADB.
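The Bradley-Terry-Luce model referenced in the abstract relates latent rewards to observed pairwise preferences: the probability that arm a beats arm b in a duel is the logistic sigmoid of the reward difference. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    # Logistic function, maps a real-valued score to a probability.
    return 1.0 / (1.0 + np.exp(-x))

def btl_preference_prob(r_a, r_b):
    # Bradley-Terry-Luce: P(a preferred over b) = sigmoid(r_a - r_b),
    # where r_a and r_b are the (latent) rewards of the two arms.
    return sigmoid(r_a - r_b)

# A higher latent reward makes an arm more likely to win the duel;
# equal rewards give a coin flip (probability 0.5).
p = btl_preference_prob(1.2, 0.4)  # sigmoid(0.8), roughly 0.69
```

In Neural-ADB the rewards r_a, r_b come from a neural network rather than a linear model, which is what lets the method capture non-linear latent reward functions.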
Problem

Research questions and friction points this paper is trying to address.

Efficient human preference feedback collection for non-linear rewards
Overcoming linear reward assumption in real-world applications
Theoretical and practical validation of the Neural-ADB algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses neural contextual dueling bandits
Handles non-linear reward functions
Sub-linear convergence of the policy's sub-optimality gap
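The active-collection idea behind these contributions can be sketched with a toy loop: score every candidate pair with a small neural reward network and query the duel whose predicted preference is most uncertain. This is a simplified stand-in for the paper's principled acquisition rule, with a fixed random network and hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer reward network r(x) = w2 . tanh(W1 x),
# standing in for the neural reward model (weights here are random,
# not trained as in the actual algorithm).
d, h = 4, 16
W1 = rng.normal(scale=0.5, size=(h, d))
w2 = rng.normal(scale=0.5, size=h)

def reward(x):
    return w2 @ np.tanh(W1 @ x)

def pref_prob(xa, xb):
    # BTL preference probability from the network's reward estimates.
    return 1.0 / (1.0 + np.exp(-(reward(xa) - reward(xb))))

def select_duel(arms):
    # Query the pair whose predicted outcome is closest to 0.5,
    # i.e. the duel the current model is least sure about.
    best, best_gap = None, np.inf
    for a in range(len(arms)):
        for b in range(a + 1, len(arms)):
            gap = abs(pref_prob(arms[a], arms[b]) - 0.5)
            if gap < best_gap:
                best, best_gap = (a, b), gap
    return best

arms = rng.normal(size=(8, d))  # 8 candidate contexts/arms
i, j = select_duel(arms)
```

In the full algorithm, the chosen pair would be shown to a human, the observed preference would update the network, and the loop would repeat; the paper's theory bounds how fast this process shrinks the policy's sub-optimality gap.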