🤖 AI Summary
Existing robust conformal prediction methods rely on bounding randomly smoothed conformity scores via Monte Carlo sampling to guarantee coverage under adversarial perturbations, incurring prohibitive computational cost. This paper binarizes the Monte Carlo samples against an adjustable threshold and certifies robustness with a single binary certificate, drastically reducing the number of samples required (e.g., only 150 for CIFAR-10) while yielding smaller robust prediction sets and strictly maintaining the user-specified coverage level. Key contributions include: (i) the first method to obtain theoretical robustness guarantees from a single binary certificate, rather than certifying every calibration or test point; (ii) elimination of the assumption that the underlying score function is bounded; and (iii) an adjustable (or automatically selected) binarization threshold that preserves coverage while controlling set size. A rigorous theoretical analysis establishes validity of the coverage guarantee, and empirical results demonstrate computational speedups of up to orders of magnitude while preserving statistical validity.
📝 Abstract
Conformal prediction (CP) converts any model's output to prediction sets with a guarantee to cover the true label with (adjustable) high probability. Robust CP extends this guarantee to worst-case (adversarial) inputs. Existing baselines achieve robustness by bounding randomly smoothed conformity scores. In practice, they need expensive Monte Carlo (MC) sampling (e.g., $\sim 10^4$ samples per point) to maintain an acceptable set size. We propose a robust conformal prediction method that produces smaller sets even with significantly fewer MC samples (e.g., 150 for CIFAR-10). Our approach binarizes samples with an adjustable (or automatically adjusted) threshold selected to preserve the coverage guarantee. Remarkably, we prove that robustness can be achieved by computing only one binary certificate, unlike previous methods that certify each calibration (or test) point. Thus, our method is faster and returns smaller robust sets. We also eliminate a previous limitation that requires a bounded score function.
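The core mechanics described above can be illustrated with a minimal sketch: binarize each noisy score against a threshold (so the resulting indicator is bounded in $[0,1]$ regardless of the raw score's range), average the indicators as the smoothed conformity score, and calibrate a split-conformal quantile on these scores. This is an assumption-laden toy, not the paper's actual algorithm or API; the score function, noise model, and all names here are illustrative.

```python
import numpy as np

def binarized_smoothed_score(score_fn, x, y, threshold, n_samples=150,
                             sigma=0.25, rng=None):
    """Monte Carlo estimate of P[score(x + noise, y) <= threshold].

    Binarizing each noisy sample against `threshold` yields a score in
    [0, 1], so no boundedness assumption on `score_fn` itself is needed.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    hits = [score_fn(x + eps, y) <= threshold for eps in noise]
    return float(np.mean(hits))

def calibrate_quantile(cal_scores, alpha=0.1):
    """Standard split-conformal quantile: the ceil((n+1)(1-alpha))-th
    smallest calibration score, guaranteeing >= 1 - alpha coverage."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(cal_scores)[min(k, n) - 1])

if __name__ == "__main__":
    # Toy unbounded score: absolute residual between input and label.
    score_fn = lambda x, y: abs(x - y)
    rng = np.random.default_rng(1)

    # Binarized smoothed scores on a toy calibration set.
    cal = [binarized_smoothed_score(score_fn, x=float(v), y=0.0,
                                    threshold=1.0, rng=rng)
           for v in rng.normal(0.0, 1.0, size=50)]
    q = calibrate_quantile(cal, alpha=0.1)
    print(f"calibrated quantile: {q:.3f}")
```

A robust variant would additionally adjust the quantile using a certificate that bounds how much an adversarial perturbation can shift the binarized score; the sketch above only shows the clean-data calibration step.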