On the Adversarial Robustness of Online Importance Sampling

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies the robustness of online importance sampling under fully adaptive adversarial streams—where the data stream evolves dynamically in response to the algorithm’s historical outputs—and asks whether high-probability accuracy guarantees can still be achieved. Addressing the breakdown of standard stochastic analysis due to element-wise dependencies, we establish, for the first time, the existence of an online importance sampling algorithm that achieves $(1\pm\varepsilon)$-approximation with high probability, while attaining optimal space complexity—matching that of the non-adaptive setting. Our core techniques integrate adversarially robust stream design with dynamic calibration of importance weights. We apply the framework to hypergraph cut sparsification and $\ell_p$ subspace embedding, achieving near-optimal accuracy–space trade-offs. Crucially, this yields the first theoretical framework for online importance sampling with provable strong adversarial robustness guarantees.

📝 Abstract
This paper studies the adversarial-robustness of importance-sampling (aka sensitivity sampling); a useful algorithmic technique that samples elements with probabilities proportional to some measure of their importance. A streaming or online algorithm is called adversarially-robust if it succeeds with high probability on input streams that may change adaptively depending on previous algorithm outputs. Unfortunately, the dependence between stream elements breaks the analysis of most randomized algorithms, and in particular that of importance-sampling algorithms. Previously, Braverman et al. [NeurIPS 2021] suggested that streaming algorithms based on importance-sampling may be adversarially-robust; however, they proved it only for well-behaved inputs. We focus on the adversarial-robustness of online importance-sampling, a natural variant where sampling decisions are irrevocable and made as data arrives. Our main technical result shows that, given as input an adaptive stream of elements $x_1,\ldots,x_T \in \mathbb{R}_+$, online importance-sampling maintains a $(1\pm\varepsilon)$-approximation of their sum while matching (up to lower order terms) the storage guarantees of the oblivious (non-adaptive) case. We then apply this result to develop adversarially-robust online algorithms for two fundamental problems: hypergraph cut sparsification and $\ell_p$ subspace embedding.
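The online importance-sampling primitive described in the abstract can be sketched as follows. This is a minimal illustrative version for the oblivious (non-adaptive) setting, not the paper's adversarially robust algorithm: the function name, the Horvitz–Thompson reweighting, and the choice of oversampling factor `c` are all assumptions made for the example.

```python
import math
import random

def online_importance_sample(stream, eps=0.1, delta=0.01):
    """Illustrative sketch of online importance sampling for estimating
    the sum of a stream of positive reals x_1, ..., x_T.

    Each arriving x is kept irrevocably with probability proportional
    to its share of the running sum; kept items are reweighted by 1/p
    (Horvitz-Thompson), so the weighted sample sum is an unbiased
    estimate of the prefix sum at every point in the stream.
    """
    c = math.log(1.0 / delta) / eps ** 2  # oversampling factor (illustrative choice)
    running_sum = 0.0
    sample = []  # irrevocable decisions: (value, inverse sampling probability)
    for x in stream:
        running_sum += x
        # importance of x relative to everything seen so far
        p = min(1.0, c * x / running_sum)
        if random.random() < p:
            sample.append((x, 1.0 / p))
    estimate = sum(x * w for x, w in sample)
    return estimate, len(sample)
```

The key property this sketch illustrates is irrevocability: once an element is dropped it is gone, and once kept its reweighted copy is stored forever, which is exactly the constraint under which the paper proves adversarial robustness.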
Problem

Research questions and friction points this paper is trying to address.

Study adversarial robustness of online importance sampling
Analyze adaptive streams with irrevocable sampling decisions
Develop robust algorithms for hypergraph cut and subspace embedding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online importance-sampling for adaptive streams
Maintains (1±ε)-approximation of sum
Matches oblivious case storage guarantees