Efficient Adversarial Attacks on High-dimensional Offline Bandits

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of offline bandit algorithms in high-dimensional settings, where even minute adversarial perturbations to reward model weights can significantly manipulate decision-making. We introduce a novel threat model tailored to offline bandits and theoretically demonstrate that the required perturbation norm decreases as dimensionality increases. Through systematic analysis, we characterize how weight perturbations affect policy behavior under both linear and ReLU neural network reward functions. Leveraging high-dimensional optimization and adversarial perturbation generation techniques, we achieve nearly 100% attack success rates on Hugging Face image aesthetic and compositional alignment evaluation tasks—substantially outperforming random perturbations. This study provides the first empirical evidence of the high sensitivity of high-dimensional offline bandit systems to structured weight-space attacks.

📝 Abstract
Bandit algorithms have recently emerged as a powerful tool for evaluating machine learning models, including generative image models and large language models, by efficiently identifying top-performing candidates without exhaustive comparisons. These methods typically rely on a reward model, often distributed with public weights on platforms such as Hugging Face, to provide feedback to the bandit. While online evaluation is expensive and requires repeated trials, offline evaluation with logged data has become an attractive alternative. However, the adversarial robustness of offline bandit evaluation remains largely unexplored, particularly when an attacker perturbs the reward model (rather than the training data) prior to bandit training. In this work, we fill this gap by investigating, both theoretically and empirically, the vulnerability of offline bandit training to adversarial manipulations of the reward model. We introduce a novel threat model in which an attacker exploits offline data in high-dimensional settings to hijack the bandit's behavior. Starting with linear reward functions and extending to nonlinear models such as ReLU neural networks, we study attacks on two Hugging Face evaluators used for generative model assessment: one measuring aesthetic quality and the other assessing compositional alignment. Our results show that even small, imperceptible perturbations to the reward model's weights can drastically alter the bandit's behavior. From a theoretical perspective, we prove a striking high-dimensional effect: as input dimensionality increases, the perturbation norm required for a successful attack decreases, making modern applications such as image evaluation especially vulnerable. Extensive experiments confirm that naive random perturbations are ineffective, whereas carefully targeted perturbations achieve near-perfect attack success rates ...
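The abstract's high-dimensional effect can be sketched for the linear case. Suppose the reward model is r(x) = w·x and the attacker wants the bandit to prefer a target arm over the current best arm. The smallest weight perturbation that flips the preference lies along the difference of the two feature vectors, and its norm scales inversely with that difference's length, which grows with dimension for generic features. The function name, the `margin` parameter, and the random-feature setup below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def minimal_weight_attack(w, x_target, x_best, margin=1e-6):
    """Smallest-norm perturbation delta such that the linear reward
    model (w + delta) prefers x_target over x_best.
    Hypothetical sketch; not the paper's exact attack."""
    diff = x_target - x_best
    gap = w @ diff  # negative while x_best still wins
    if gap > 0:
        return np.zeros_like(w)  # target already preferred
    # Closed form: the minimal delta with (w + delta) @ diff > 0
    # points along diff, with norm (|gap| + margin) / ||diff||.
    return (-gap + margin) * diff / (diff @ diff)

rng = np.random.default_rng(0)
norms = []
for d in (10, 100, 1000, 10000):
    w = rng.standard_normal(d) / np.sqrt(d)  # unit-scale reward weights
    x_a, x_b = rng.standard_normal(d), rng.standard_normal(d)
    if w @ x_a < w @ x_b:           # make x_a the incumbent winner
        x_a, x_b = x_b, x_a
    delta = minimal_weight_attack(w, x_b, x_a)
    norms.append(np.linalg.norm(delta))
    print(f"d={d:6d}  required ||delta|| = {norms[-1]:.4f}")
```

Because the reward gap w·(x_b − x_a) stays O(1) while ||x_b − x_a|| grows like √d, the required perturbation norm shrinks as d increases, matching the abstract's claim that higher-dimensional reward models are easier to hijack.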
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks
offline bandits
reward model
high-dimensional
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial attacks
offline bandits
reward model perturbation
high-dimensional vulnerability
generative model evaluation