🤖 AI Summary
Randomized smoothing incurs prohibitive computational cost—requiring ∼10⁵ forward passes—due to its high sample complexity in statistical estimation for robustness certification. This work addresses the fundamental problem of minimizing the number of samples needed to reliably determine whether a given input is certifiably robust within a specified ℓ₂ radius, under strict statistical guarantees. We propose an adaptive sequential sampling framework grounded in confidence sequences, achieving theoretically optimal sample complexity. Additionally, we introduce a randomized Clopper–Pearson confidence interval that yields significantly tighter robustness certificates. Experiments demonstrate over a tenfold reduction in required samples, improved certification rates, and substantially lower computational overhead—all while preserving rigorous, provable statistical reliability.
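For context, the standard (non-adaptive) certification baseline draws a fixed budget of noisy forward passes, counts how often the top class wins, and converts a one-sided Clopper–Pearson lower bound on that probability into an ℓ₂ radius via σ·Φ⁻¹(p̲). Below is a minimal stdlib-only Python sketch of that baseline; the function names are illustrative, and vote counts are passed in directly rather than produced by an actual smoothed classifier:

```python
import math
from statistics import NormalDist

def binom_tail(n, k, p):
    """P(Bin(n, p) >= k), computed via exact log-binomial terms."""
    if k <= 0:
        return 1.0
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    log_n_fact = math.lgamma(n + 1)
    total = 0.0
    for i in range(k, n + 1):
        log_term = (log_n_fact - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                    + i * math.log(p) + (n - i) * math.log1p(-p))
        total += math.exp(log_term)
    return min(total, 1.0)

def clopper_pearson_lower(k, n, alpha):
    """One-sided (1 - alpha) Clopper-Pearson lower bound on the success
    probability: the p solving P(Bin(n, p) >= k) = alpha, found by bisection
    (the tail probability is increasing in p)."""
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection on [0, 1]
        mid = 0.5 * (lo + hi)
        if binom_tail(n, k, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

def certified_radius(k, n, sigma, alpha):
    """l2 radius sigma * Phi^{-1}(p_lower) certified from k top-class votes
    out of n noisy forward passes; None means the procedure must abstain."""
    p_lower = clopper_pearson_lower(k, n, alpha)
    if p_lower <= 0.5:
        return None  # lower bound does not exceed 1/2: abstain
    return sigma * NormalDist().inv_cdf(p_lower)
```

For example, 990 top-class votes out of n = 1000 at α = 0.001 certify a positive radius, while fewer samples at the same vote fraction yield a looser lower bound and hence a smaller radius; this dependence on n is exactly the sample-complexity cost the paper targets.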
📝 Abstract
Randomized smoothing is a popular certified defense against adversarial attacks. At its core lies a statistical estimation problem that is usually very time-consuming to solve: certifying a single input point typically requires numerous (around $10^5$) forward passes of the classifier. In this paper, we revisit the statistical estimation problems behind randomized smoothing to determine whether this computational burden is necessary. In particular, we consider the standard task of adversarial robustness: deciding whether a point is robust at a given radius using as few samples as possible while maintaining statistical guarantees. We present estimation procedures based on confidence sequences that enjoy the same statistical guarantees as the standard methods while achieving optimal sample complexity for this estimation task, and we empirically demonstrate their strong performance. Additionally, we provide a randomized version of the Clopper-Pearson confidence interval, which yields strictly stronger certificates.
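The adaptive idea can be illustrated with a deliberately simple confidence sequence: spend a shrinking slice of the error budget on a Hoeffding interval at each batch, so that by a union bound the intervals are valid uniformly over time, and stop as soon as the interval separates the top-class probability from the decision threshold τ = Φ(r/σ). This is only a sketch under that union-bound construction; the paper's procedures use tighter confidence sequences, and `sample_batch`, the batch size, and the stopping logic here are illustrative assumptions:

```python
import math
import random
from statistics import NormalDist

def sequential_certify(sample_batch, tau, alpha, batch_size=100, max_batches=1000):
    """Anytime-valid sequential test of p > tau vs p < tau, where p is the
    probability that the base classifier returns the top class under noise.

    sample_batch(m) returns the number of top-class votes among m fresh noisy
    forward passes.  Validity comes from a union bound: batch t spends
    alpha_t = alpha * 6 / (pi^2 t^2), so sum_t alpha_t = alpha and the
    Hoeffding intervals cover p simultaneously for all t w.p. >= 1 - alpha.
    """
    successes, n = 0, 0
    for t in range(1, max_batches + 1):
        successes += sample_batch(batch_size)
        n += batch_size
        alpha_t = alpha * 6.0 / (math.pi ** 2 * t ** 2)
        # Two-sided Hoeffding half-width at level alpha_t for n samples.
        half_width = math.sqrt(math.log(2.0 / alpha_t) / (2.0 * n))
        p_hat = successes / n
        if p_hat - half_width > tau:
            return "certified", n        # robust at the requested radius
        if p_hat + half_width < tau:
            return "not-certified", n    # not certifiable at this radius
    return "undecided", n

# Deciding l2-robustness at radius r with noise level sigma reduces to testing
# p > tau with tau = Phi(r / sigma); here we simulate the classifier's votes.
if __name__ == "__main__":
    tau = NormalDist().cdf(0.5 / 0.5)    # radius 0.5, sigma 0.5
    rng = random.Random(0)
    vote = lambda m: sum(rng.random() < 0.95 for _ in range(m))  # p = 0.95
    print(sequential_certify(vote, tau, alpha=0.001))
```

In simulations like the one above (true top-class probability 0.95 against τ ≈ 0.84), such a sequential test typically stops after a few hundred samples instead of a fixed $10^5$ budget, while the union bound keeps the overall error probability at α.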