Reliable Abstention under Adversarial Injections: Tight Lower Bounds and New Upper Bounds

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses online prediction with abstention over a mixed stochastic–adversarial data stream, where an adversary may inject cleanly labeled but adversarially chosen examples without the learner knowing which rounds are adversarial. The authors propose a potential-based framework built on robust witness sets — small subsets of labeled examples that certify predictions while remaining resilient to contamination — requiring no prior knowledge of the data distribution. They prove an Ω(√T) lower bound on combined error already for VC dimension 1, establishing a sharp separation from the distribution-oracle regime, where O(d² log T) combined error is achievable. Instantiating the framework with inference dimension yields combined error Õ(T^{1−1/k}) for classes of inference dimension k; instantiating it with a newly introduced relaxation, the "certificate dimension", gives the first distribution-agnostic Õ(T^{2/3}) bound for halfspaces in ℝ², thereby advancing the theoretical foundations of abstention-capable learning in adversarial environments.

📝 Abstract
We study online learning in the adversarial injection model introduced by [Goel et al. 2017], where a stream of labeled examples is predominantly drawn i.i.d.\ from an unknown distribution $\mathcal{D}$, but may be interspersed with adversarially chosen instances without the learner knowing which rounds are adversarial. Crucially, labels are always consistent with a fixed target concept (the clean-label setting). The learner is additionally allowed to abstain from predicting, and the total error counts the mistakes whenever the learner decides to predict and incorrect abstentions when it abstains on i.i.d.\ rounds. Perhaps surprisingly, prior work shows that oracle access to the underlying distribution yields $O(d^2 \log T)$ combined error for VC dimension $d$, while distribution-agnostic algorithms achieve only $\tilde{O}(\sqrt{T})$ for restricted classes, leaving open whether this gap is fundamental. We resolve this question by proving a matching $\Omega(\sqrt{T})$ lower bound for VC dimension $1$, establishing a sharp separation between the two information regimes. On the algorithmic side, we introduce a potential-based framework driven by \emph{robust witnesses}, small subsets of labeled examples that certify predictions while remaining resilient to adversarial contamination. We instantiate this framework using two combinatorial dimensions: (1) \emph{inference dimension}, yielding combined error $\tilde{O}(T^{1-1/k})$ for classes of inference dimension $k$, and (2) \emph{certificate dimension}, a new relaxation we introduce. As an application, we show that halfspaces in $\mathbb{R}^2$ have certificate dimension $3$, obtaining the first distribution-agnostic bound of $\tilde{O}(T^{2/3})$ for this class. This is notable since [Blum et al. 2021] showed halfspaces are not robustly learnable under clean-label attacks without abstention.
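To make the error accounting in the abstract concrete, here is a minimal toy simulation of the clean-label injection model with abstention. It is purely illustrative and is not the paper's algorithm: the target is a one-dimensional threshold concept (VC dimension 1), the learner is a simple version-space rule that predicts only when all consistent thresholds agree, and the adversary's injection strategy is a hypothetical placeholder. Combined error counts mistakes on rounds where the learner predicts, plus abstentions on i.i.d. rounds only, exactly as defined above.

```python
import random


def run_stream(T=1000, theta=0.6, adv_rounds=frozenset(), seed=0):
    """Toy simulation of the clean-label adversarial-injection model
    with abstention (illustrative only, not the paper's algorithm).

    Target concept: f(x) = 1 iff x >= theta, a threshold on [0, 1]
    (a class of VC dimension 1). The learner keeps the interval
    [lo, hi] of thresholds consistent with the history and predicts
    only when every consistent threshold gives the same label.
    """
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0      # version space: thresholds still consistent
    combined_error = 0
    for t in range(T):
        if t in adv_rounds:
            # hypothetical adversary: inject a point near the boundary
            # (labels must still be clean, i.e. consistent with theta)
            x = theta - 1e-9
        else:
            x = rng.random()           # i.i.d. round
        y = 1 if x >= theta else 0     # clean label, always

        # predict only if all thresholds in [lo, hi] agree on x
        if x >= hi:
            pred = 1
        elif x < lo:
            pred = 0
        else:
            pred = None                # abstain

        if pred is None:
            if t not in adv_rounds:
                combined_error += 1    # abstention on an i.i.d. round counts
        elif pred != y:
            combined_error += 1        # mistake whenever the learner predicts

        # clean-label update shrinks the version space
        if y == 1:
            hi = min(hi, x)            # consistent thresholds satisfy th <= x
        else:
            lo = max(lo, x)            # consistent thresholds satisfy th > x
    return combined_error
```

Running `run_stream` with and without adversarial rounds shows the trade-off the model formalizes: abstaining avoids mistakes but is itself charged on i.i.d. rounds, so the learner cannot abstain its way to zero combined error.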
Problem

Research questions and friction points this paper is trying to address.

adversarial injections
online learning
abstention
distribution-agnostic learning
VC dimension
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial injection
abstention
robust witnesses
certificate dimension
online learning