Probably Approximately Global Robustness Certification

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Formal verification methods for classifier adversarial robustness suffer from high computational cost, while sampling-based approaches lack theoretical guarantees. Method: This paper proposes a probabilistic global robustness certification framework that samples an ε-net over the input space and invokes a local robustness oracle on each sampled point, without requiring access to or unfolding the model's internal architecture. Contribution/Results: Its sample complexity is the first to be independent of input dimensionality, number of classes, and network size, overcoming scalability bottlenecks inherent in prior verification methods. Theoretically, the framework provides rigorous probabilistic guarantees (e.g., certifying global robustness with probability at least 1−δ). Empirically, it characterizes robustness better than existing sampling-based methods and scales better than formal methods, especially on large deep neural networks.

📝 Abstract
We propose and investigate probabilistic guarantees for the adversarial robustness of classification algorithms. While traditional formal verification approaches for robustness are intractable and sampling-based approaches do not provide formal guarantees, our approach is able to efficiently certify a probabilistic relaxation of robustness. The key idea is to sample an $\epsilon$-net and invoke a local robustness oracle on the sample. Remarkably, the size of the sample needed to achieve probably approximately global robustness guarantees is independent of the input dimensionality, the number of classes, and the learning algorithm itself. Our approach can, therefore, be applied even to large neural networks that are beyond the scope of traditional formal verification. Experiments empirically confirm that it characterizes robustness better than state-of-the-art sampling-based approaches and scales better than formal methods.
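The certification loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: `sample_input`, `oracle`, and the choice of `num_samples` are placeholders, and the paper's contribution is precisely the bound relating the sample size to the confidence and approximation parameters.

```python
def certify_probably_approximately_robust(model, sample_input, oracle,
                                          num_samples, epsilon):
    """Hypothetical sketch of sampling-based probabilistic certification.

    Draws `num_samples` points from the input distribution and invokes a
    local robustness oracle on each.  If every sampled point is locally
    robust within radius `epsilon`, a probabilistic global-robustness
    certificate is reported; how `num_samples` relates to the confidence
    delta is determined by the paper's sample-complexity bound, which is
    not reproduced here.
    """
    for _ in range(num_samples):
        x = sample_input()                  # draw a point from the data distribution
        if not oracle(model, x, epsilon):   # query the local robustness oracle
            return False                    # witnessed a non-robust region
    return True                             # certify (probabilistically)
```

Note that the oracle is treated as a black box, matching the paper's claim that the guarantee is independent of the learning algorithm itself.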
Problem

Research questions and friction points this paper is trying to address.

Providing probabilistic guarantees for adversarial robustness of classifiers
Certifying robustness efficiently using sampling and local oracles
Scaling verification to large networks beyond traditional formal methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sampling ε-nets for probabilistic robustness certification
Using local robustness oracles on sampled points
Achieving guarantees independent of dimensionality and algorithm
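The local robustness oracle invoked on each sampled point can in practice be a formal verifier. The toy stand-in below only checks a finite grid inside the perturbation ball (so it is itself not a sound oracle); the function name, the grid resolution, and the L-infinity geometry are all illustrative assumptions, not details from the paper.

```python
import itertools

def grid_local_oracle(model, x, epsilon, steps=4):
    """Toy stand-in for a local robustness oracle (illustrative only).

    Checks that the model's prediction is constant on a finite grid inside
    the L-infinity ball of radius `epsilon` around `x`.  A real oracle
    (e.g. a formal verifier) would cover the entire ball; this sketch only
    samples it, and the grid size grows exponentially in the input
    dimension, so it is suitable only for tiny examples.
    """
    base = model(x)
    offsets = [-epsilon + 2 * epsilon * k / steps for k in range(steps + 1)]
    for delta in itertools.product(offsets, repeat=len(x)):
        perturbed = [xi + di for xi, di in zip(x, delta)]
        if model(perturbed) != base:
            return False        # prediction flips inside the ball
    return True                 # prediction constant on the sampled grid
```

Any sound local verifier with this interface could be plugged into the sampling loop, which is what makes the certification framework independent of the network's size and architecture.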