🤖 AI Summary
This paper studies the contextual dueling bandits problem under adversarial preference feedback, aiming to robustly align generative models—particularly large language models—against malicious manipulation of preference labels. To address preference flipping by adversaries seeking to corrupt model outputs, we propose the Robust Contextual Dueling Bandits (RCDB) algorithm, which is based on uncertainty-weighted maximum likelihood estimation and achieves a regret bound of Õ(d√T/κ + dC/κ); a matching lower bound shows this is nearly minimax optimal both with and without (C = 0) adversarial feedback, the first such guarantee for dueling bandits under adversarial preference feedback. In addition, for the sigmoid link function, we develop a derivative-aware maximum likelihood estimator that eliminates the dependence on the link function's curvature parameter κ in the leading term with respect to T and reduces the exponential dependence on the parameter radius B to a polynomial one, significantly improving practicality and scalability.
📝 Abstract
Learning from human feedback plays an important role in aligning generative models, such as large language models (LLMs). However, the effectiveness of this approach can be undermined by adversaries who intentionally provide misleading preferences to steer the output in an undesirable or harmful direction. To tackle this challenge, we study a specific model within this problem domain: contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary. We propose an algorithm, Robust Contextual Dueling Bandits (RCDB), which is based on uncertainty-weighted maximum likelihood estimation. Our algorithm achieves an $\tilde O(d\sqrt{T}/\kappa + dC/\kappa)$ regret bound, where $T$ is the number of rounds, $d$ is the dimension of the context, $\kappa$ is a lower bound on the derivative of the link function, and $0 \le C \le T$ is the total number of rounds with adversarial feedback. We also prove a lower bound showing that our regret bound is nearly optimal, both in scenarios with and without ($C = 0$) adversarial feedback. Our work is the first to achieve nearly minimax optimal regret for dueling bandits in the presence of adversarial preference feedback. Additionally, for the sigmoid link function, we develop a novel algorithm that incorporates the effect of local derivatives into the maximum likelihood estimation (MLE) analysis through a refined method for estimating the link function's derivative. This eliminates the $\kappa$ dependence in the leading term with respect to $T$ and reduces the exponential dependence on the parameter radius $B$ to a polynomial dependence.
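The uncertainty-weighted MLE idea at the core of RCDB can be illustrated with a minimal sketch: each dueling round contributes a feature-difference vector $z_t$ for the compared pair and a binary preference label $y_t$, and rounds whose contexts carry high elliptical uncertainty are down-weighted before fitting a sigmoid-link model, capping the influence any single (possibly flipped) label can have on the estimator. The function names, the batch (rather than sequential) construction of the Gram matrix, and the plain gradient-descent solver below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sigmoid(u):
    """Sigmoid link function mu(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def uncertainty_weights(Z, lam=1.0, gamma=1.0):
    """Assign round t the weight min(1, gamma / ||z_t||_{Sigma^{-1}}),
    where z_t is the feature difference of the dueling pair and Sigma is
    the regularized Gram matrix. Rounds with high uncertainty (poor
    coverage by past data) are down-weighted, limiting the damage a
    flipped preference label can do to the estimator."""
    d = Z.shape[1]
    Sigma = lam * np.eye(d) + Z.T @ Z
    Sigma_inv = np.linalg.inv(Sigma)
    # Elliptical norms ||z_t||_{Sigma^{-1}} for every round at once.
    unc = np.sqrt(np.einsum('ti,ij,tj->t', Z, Sigma_inv, Z))
    return np.minimum(1.0, gamma / np.maximum(unc, 1e-12))

def weighted_mle(Z, y, w, lam=1.0, lr=0.5, iters=2000):
    """Gradient descent on the weighted, L2-regularized negative
    log-likelihood of Bernoulli preference labels under a sigmoid link."""
    T, d = Z.shape
    theta = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(Z @ theta)
        grad = (Z.T @ (w * (p - y)) + lam * theta) / T
        theta -= lr * grad
    return theta
```

A typical use would compute `w = uncertainty_weights(Z)` from the observed feature differences and then fit `theta_hat = weighted_mle(Z, y, w)`; with no corruption and well-covered contexts the weights are all 1 and the estimator reduces to ordinary regularized logistic MLE.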