Non-Parametric Probabilistic Robustness: A Conservative Metric with Optimized Perturbation Distributions

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing probabilistic robustness (PR) metrics rely on pre-specified, fixed perturbation distributions, failing to capture distributional uncertainty inherent in real-world scenarios. To address this, we propose Non-Parametric Probabilistic Robustness (NPPR), the first PR framework that incorporates non-parametric principles—learning input-adaptive perturbation distributions directly from data to enable conservative robustness estimation under distributional uncertainty. Our method models the perturbation distribution via a Gaussian Mixture Model (GMM) and enhances estimation accuracy using an MLP-based prediction head coupled with bicubic upsampling. Extensive evaluation across multiple datasets—including CIFAR-10, CIFAR-100, and Tiny ImageNet—and diverse model architectures demonstrates that NPPR yields, on average, up to 40% more conservative and practically meaningful robustness bounds compared to state-of-the-art methods, significantly improving the reliability of robustness assessment.

📝 Abstract
Deep learning (DL) models, despite their remarkable success, remain vulnerable to small input perturbations that can cause erroneous outputs, motivating the recent proposal of probabilistic robustness (PR) as a complementary alternative to adversarial robustness (AR). However, existing PR formulations assume a fixed and known perturbation distribution, an unrealistic expectation in practice. To address this limitation, we propose non-parametric probabilistic robustness (NPPR), a more practical PR metric that does not rely on any predefined perturbation distribution. Following the non-parametric paradigm in statistical modeling, NPPR learns an optimized perturbation distribution directly from data, enabling conservative PR evaluation under distributional uncertainty. We further develop an NPPR estimator based on a Gaussian Mixture Model (GMM) with Multilayer Perceptron (MLP) heads and bicubic up-sampling, covering various input-dependent and input-independent perturbation scenarios. Theoretical analyses establish the relationships among AR, PR, and NPPR. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet across ResNet18/50, WideResNet50, and VGG16 validate NPPR as a more practical robustness metric, showing up to 40% more conservative (lower) PR estimates compared to the common perturbation distributions assumed by state-of-the-art methods.
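The PR metric the abstract builds on is the probability, under a perturbation distribution, that a model's prediction on an input is unchanged. A minimal sketch of that Monte Carlo estimate is below; the toy classifier and the fixed Gaussian perturbation distribution are illustrative assumptions (the fixed-distribution assumption is exactly what NPPR relaxes), not the paper's setup:

```python
import numpy as np

def probabilistic_robustness(predict, x, sample_perturbation, n_samples=1000, seed=0):
    """Monte Carlo PR estimate: the fraction of sampled perturbations
    under which the prediction on x is unchanged."""
    rng = np.random.default_rng(seed)
    y0 = predict(x)
    hits = sum(predict(x + sample_perturbation(rng)) == y0 for _ in range(n_samples))
    return hits / n_samples

# Toy linear classifier on 2-D inputs: class = sign of the first coordinate.
predict = lambda x: int(x[0] > 0)

# A pre-specified Gaussian perturbation distribution (the fixed assumption
# that existing PR formulations make and NPPR avoids).
gauss = lambda rng: rng.normal(0.0, 0.1, size=2)

x = np.array([0.5, 0.0])
pr = probabilistic_robustness(predict, x, gauss)
```

Under this mild noise the input sits far from the decision boundary, so the estimate is close to 1; a poorly chosen fixed distribution can thus overstate robustness, which motivates optimizing the distribution instead.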
Problem

Research questions and friction points this paper is trying to address.

Addressing unrealistic fixed perturbation distribution assumptions in probabilistic robustness
Proposing non-parametric probabilistic robustness with data-learned perturbation distributions
Developing conservative robustness metrics under distributional uncertainty scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns optimized perturbation distribution from data
Uses GMM with MLP heads for estimation
Provides conservative robustness under distributional uncertainty
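The bullets above can be illustrated with a hedged sketch: evaluate PR under several candidate GMM perturbation distributions and report the minimum, i.e. the most conservative estimate. The candidate GMMs, the toy classifier, and the grid-search-over-candidates scheme are illustrative assumptions; the paper instead *learns* the GMM parameters (via MLP heads and bicubic up-sampling), which this sketch does not reproduce:

```python
import numpy as np

def sample_gmm(rng, weights, means, stds, size):
    """Draw one perturbation from a diagonal Gaussian mixture."""
    k = rng.choice(len(weights), p=weights)   # pick a mixture component
    return rng.normal(means[k], stds[k], size=size)

def conservative_pr(predict, x, candidate_gmms, n_samples=500, seed=0):
    """PR under each candidate GMM; return the minimum (most conservative)."""
    rng = np.random.default_rng(seed)
    y0 = predict(x)
    worst = 1.0
    for weights, means, stds in candidate_gmms:
        hits = 0
        for _ in range(n_samples):
            delta = sample_gmm(rng, weights, means, stds, x.shape)
            hits += predict(x + delta) == y0
        worst = min(worst, hits / n_samples)
    return worst

# Toy classifier: class = sign of the first coordinate.
predict = lambda x: int(x[0] > 0)
x = np.array([0.5, 0.0])

# Two hypothetical candidate perturbation distributions: a mild one and a
# bimodal one whose second component pushes inputs across the boundary.
mild = ([1.0], [0.0], [0.05])
aggressive = ([0.5, 0.5], [0.0, -0.6], [0.05, 0.05])

worst = conservative_pr(predict, x, [mild, aggressive])
```

The mild candidate alone would report PR near 1, while the conservative estimate is driven down by the adversarially shaped mixture, mirroring the paper's finding that optimized perturbation distributions yield substantially lower (more conservative) PR than common fixed choices.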