Tight Robustness Certificates and Wasserstein Distributional Attacks for Deep Neural Networks

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Wasserstein distributionally robust optimization (WDRO) methods rely on global Lipschitz continuity or strong duality, which leads to overly loose robustness certificates or prohibitive computational cost. This paper proposes a WDRO framework that works directly with the primal problem: by leveraging the piecewise-affine structure of ReLU networks, it constructs exact Lipschitz certificates, yielding a tight and computationally tractable characterization of the WDRO problem. It further introduces distribution-level Wasserstein adversarial attacks, which go beyond conventional pointwise attacks by synthesizing worst-case perturbed distributions. Experiments show that the method maintains high certified robust accuracy while significantly tightening robustness bounds and producing more potent distributional adversarial examples. This work establishes a new paradigm for analyzing and verifying the distributional robustness of deep neural networks.

📝 Abstract
Wasserstein distributionally robust optimization (WDRO) provides a framework for adversarial robustness, yet existing methods based on global Lipschitz continuity or strong duality often yield loose upper bounds or require prohibitive computation. In this work, we address these limitations by introducing a primal approach and adopting a notion of exact Lipschitz certificate to tighten the WDRO upper bound. In addition, we propose a novel Wasserstein distributional attack (WDA) that directly constructs a candidate for the worst-case distribution. Compared to existing point-wise attacks and their variants, our WDA offers greater flexibility in the number and location of attack points. In particular, by leveraging the piecewise-affine structure of ReLU networks on their activation cells, our approach yields an exact, tractable characterization of the corresponding WDRO problem. Extensive evaluations demonstrate that our method achieves competitive robust accuracy against state-of-the-art baselines while offering tighter certificates than existing methods. Our code is available at https://github.com/OLab-Repo/WDA
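The structural fact the abstract leans on is that a ReLU network is affine on each activation cell, so its exact local Lipschitz constant on the cell containing a point is the spectral norm of the active-unit-masked weight product. A minimal sketch of that computation (the function name and the bias-free, fully-connected setting are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def local_lipschitz_on_cell(weights, x):
    """Exact l2 Lipschitz constant of a bias-free ReLU network on the
    activation cell containing x.

    On that cell the network is affine, so its Jacobian is the product of
    the weight matrices masked by the active units, and the local Lipschitz
    constant is the spectral norm of this product.
    `weights` is a list of weight matrices [W1, ..., WL] applied in order.
    """
    h = weights[0] @ x       # pre-activation of the first layer
    J = weights[0]           # running Jacobian on this cell
    for W in weights[1:]:
        D = np.diag((h > 0).astype(float))  # active-unit mask at x
        J = W @ D @ J
        h = W @ np.maximum(h, 0.0)
    return np.linalg.norm(J, 2)  # spectral norm = exact Lipschitz on the cell
```

For a global (rather than per-cell) certificate one would maximize this quantity over the cells a perturbation ball can reach; the sketch only shows the per-cell term.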
Problem

Research questions and friction points this paper is trying to address.

Tightening robustness certificates for Wasserstein distributionally robust optimization
Developing flexible Wasserstein distributional attacks for neural networks
Providing exact tractable characterization of WDRO for ReLU networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Primal approach tightens WDRO upper bound
Wasserstein distributional attack constructs worst-case distribution
Leverages piecewise-affine structure of ReLU networks
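To make the distributional-attack bullet concrete: unlike a pointwise attack, which gives every sample its own perturbation budget, a Wasserstein distributional attack shares one transport budget across the whole sample, so some points can move far while others stay put. The toy sketch below illustrates this with projected gradient ascent on a linear logistic model under an average squared-shift budget; the function name, the logistic-loss target, and the projection-by-rescaling step are assumptions for the sketch, not the paper's WDA algorithm:

```python
import numpy as np

def distributional_attack_sketch(X, y, w, eps, steps=100, lr=0.1):
    """Toy distributional attack on a linear classifier w with labels y in {-1, +1}.

    Shifts each row of X to maximize the total logistic loss subject to a
    shared, W2-style average transport budget:
        (1/n) * sum_i ||delta_i||^2 <= eps^2,
    enforced by rescaling all shifts back onto the budget ball.
    """
    delta = np.zeros_like(X)
    for _ in range(steps):
        z = (X + delta) @ w * y
        # gradient of log(1 + exp(-z)) w.r.t. each perturbed point
        g = (-y / (1.0 + np.exp(z)))[:, None] * w[None, :]
        delta += lr * g  # ascend the loss
        avg_cost = np.mean(np.sum(delta**2, axis=1))
        if avg_cost > eps**2:            # project onto the average-cost ball
            delta *= eps / np.sqrt(avg_cost)
    return X + delta
```

Because the budget constrains only the average cost, the ascent naturally concentrates the perturbation on the points where it raises the loss most, which is the extra flexibility the abstract attributes to the WDA over pointwise attacks.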
Bach C. Le
College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam
Tung V. Dao
College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam
Binh T. Nguyen
VinUniversity
statistics, optimal transport
Hong T.M. Chu
College of Engineering & Computer Science, VinUniversity
Optimization, Optimal transport, Machine learning