🤖 AI Summary
Stochastic multi-armed bandits (MABs) are critically vulnerable in realistic adversarial settings, yet existing threat models (e.g., unbounded per-round reward manipulation) rest on assumptions that rarely hold in practice.
Method: We propose a practical "fake data injection" threat model in which an attacker injects bounded-magnitude, syntactically valid fake feedback samples into the learner's historical observations, subject to strict constraints on the total number and timing of injections. Unlike prior work, our model formally captures *bounded magnitude*, *limited budget*, and *temporal controllability*. Leveraging confidence-interval analysis and Bayesian posterior perturbation, we design provably effective attacks against UCB and Thompson Sampling.
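To make the injection idea concrete, here is a minimal sketch of how corrupting a UCB1 learner's history with fake zero-reward samples can redirect it toward a suboptimal target arm. Everything here is an illustrative assumption, not the paper's construction: the arm means, the horizon, and the fixed 0.2 empirical-mean threshold the attacker drags non-target arms below are all hypothetical.

```python
import math
import random

def ucb_injection_demo(T=2000, target=2, seed=0):
    """Hypothetical fake-data-injection attack on UCB1 (illustrative only)."""
    rng = random.Random(seed)
    means = [0.9, 0.8, 0.5]            # assumed Bernoulli means; target arm 2 is suboptimal
    n_arms = len(means)
    counts = [0] * n_arms              # learner's (corruptible) history
    sums = [0.0] * n_arms
    injected = 0                       # attack cost: number of fake samples
    target_pulls = 0

    for t in range(1, T + 1):
        if 0 in counts:                # play each arm once first
            arm = counts.index(0)
        else:                          # UCB1 index over the corrupted history
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0  # true Bernoulli feedback
        counts[arm] += 1
        sums[arm] += reward
        if arm == target:
            target_pulls += 1
        else:
            # After each non-target pull, inject bounded fake samples
            # (reward 0) until that arm's empirical mean falls below an
            # assumed threshold well under the target arm's true mean.
            while sums[arm] / counts[arm] > 0.2:
                counts[arm] += 1       # one fake zero-reward sample
                injected += 1
    return target_pulls / T, injected

frac, cost = ucb_injection_demo()
print(f"target pulled {frac:.0%} of rounds, {cost} fake samples injected")
```

Because the exploration bonus shrinks as the corrupted counts grow, non-target arms are revisited only logarithmically often, so the number of injected samples stays far below the horizon while the target arm dominates.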
Results: Our attacks force the target arm to be selected in up to 98% of rounds while incurring only sublinear attack cost (i.e., o(T)), as validated on both synthetic and real-world datasets. This exposes severe security flaws in standard stochastic bandit algorithms under realistic adversarial conditions.
📝 Abstract
Adversarial attacks on stochastic bandits have traditionally relied on unrealistic assumptions, such as per-round reward manipulation and unbounded perturbations, limiting their relevance to real-world systems. We propose a more practical threat model, Fake Data Injection, which reflects realistic adversarial constraints: the attacker can inject only a limited number of bounded fake feedback samples into the learner's history, simulating legitimate interactions. We design efficient attack strategies under this model, explicitly addressing both magnitude constraints (on reward values) and temporal constraints (on when and how often data can be injected). Our theoretical analysis shows that these attacks can mislead both Upper Confidence Bound (UCB) and Thompson Sampling algorithms into selecting a target arm in nearly all rounds while incurring only sublinear attack cost. Experiments on synthetic and real-world datasets validate the effectiveness of our strategies, revealing significant vulnerabilities in widely used stochastic bandit algorithms under practical adversarial scenarios.
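The Bayesian posterior-perturbation side of the abstract can be sketched against Thompson Sampling with Beta(1, 1) priors: fake zero-reward samples act as pseudo-failures that pull a non-target arm's posterior mean down. The arm means, horizon, and the 0.2 posterior-mean threshold below are illustrative assumptions of this sketch, not the paper's parameters.

```python
import random

def ts_injection_demo(T=2000, target=2, seed=0):
    """Hypothetical fake-data-injection attack on Thompson Sampling (illustrative only)."""
    rng = random.Random(seed)
    means = [0.9, 0.8, 0.5]                  # assumed Bernoulli means; target arm 2 is suboptimal
    n_arms = len(means)
    alpha = [1.0] * n_arms                   # Beta posterior: 1 + observed successes
    beta = [1.0] * n_arms                    # Beta posterior: 1 + observed failures
    injected = 0                             # attack cost: number of fake samples
    target_pulls = 0

    for _ in range(T):
        # Thompson Sampling: draw one posterior sample per arm, play the argmax
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = samples.index(max(samples))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        alpha[arm] += reward
        beta[arm] += 1.0 - reward
        if arm == target:
            target_pulls += 1
        else:
            # Inject fake failures until the arm's posterior mean falls
            # below an assumed threshold, so its samples rarely beat the
            # target arm's.
            while alpha[arm] / (alpha[arm] + beta[arm]) > 0.2:
                beta[arm] += 1.0             # one fake zero-reward sample
                injected += 1
    return target_pulls / T, injected

frac, cost = ts_injection_demo()
print(f"target pulled {frac:.0%} of rounds, {cost} fake samples injected")
```

Each injection tightens the corrupted posterior around a low mean, so the probability that a non-target arm's sample exceeds the target's shrinks rapidly and the attack cost stays well below the horizon.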