Enhancing Certifiable Semantic Robustness via Robust Pruning of Deep Neural Networks

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Certified robustness of deep neural networks against semantic transformations (e.g., brightness and contrast changes) suffers from over-parameterization, hindering tight and efficient robustness certification. Method: This paper proposes a structured pruning framework tailored for certifiable robustness. It introduces the Unbiased Smooth Neuron (USN) metric to quantify neuron-level sensitivity to semantic perturbations and guides pruning accordingly. Additionally, it incorporates a Wasserstein-distance-based regularization term into the loss function to enforce concentration of neuron response distributions post-pruning, thereby improving certification tightness. The method jointly leverages layer-wise stability analysis and neuron-wise variance modeling. Contribution/Results: The approach achieves improved model sparsity without compromising verifiable robustness. Evaluated on keypoint detection, it significantly boosts certified coverage and computational efficiency over state-of-the-art baselines.
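The paper's exact USN formula is not given here; as a rough illustration of the pruning idea, the sketch below uses a hypothetical proxy score that rewards neurons whose responses stay unbiased (stable mean) and smooth (low variance) under input perturbations, then keeps only the top-scoring neurons. All names and the scoring formula are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def usn_score(activations: np.ndarray) -> np.ndarray:
    """Hypothetical proxy for the USN metric.

    activations: shape (num_perturbations, num_neurons), the responses of one
    layer to perturbed copies of the same input; row 0 is the clean response.
    """
    bias = np.abs(activations.mean(axis=0) - activations[0])  # drift from clean response
    spread = activations.std(axis=0)                          # sensitivity to perturbation
    return 1.0 / (1.0 + bias + spread)  # high score = unbiased and smooth

def prune_mask(activations: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the top `keep_ratio` fraction of neurons ranked by the proxy score."""
    scores = usn_score(activations)
    k = max(1, int(round(keep_ratio * scores.size)))
    thresh = np.sort(scores)[-k]
    return scores >= thresh  # boolean mask: True = retain neuron
```

For example, a neuron whose response is constant under all perturbations scores 1.0 and is retained over a neuron whose response oscillates.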

📝 Abstract
Deep neural networks have been widely adopted in many vision and robotics applications with visual inputs. It is essential to verify their robustness against semantic transformation perturbations, such as brightness and contrast changes. However, current certified training and robustness certification methods face the challenge of over-parameterization, which hinders tightness and scalability due to overly complicated network structures. To this end, we first analyze the stability and variance of layers and neurons against input perturbation, showing that certifiable robustness can be indicated by a fundamental Unbiased and Smooth Neuron metric (USN). Based on USN, we introduce a novel neural network pruning method that removes neurons with low USN and retains those with high USN, thereby preserving model expressiveness without over-parameterization. To further enhance this pruning process, we propose a new Wasserstein distance loss to ensure that pruned neurons are more concentrated across layers. We validate our approach through extensive experiments on the challenging robust keypoint detection task, which involves realistic brightness and contrast perturbations, demonstrating that our method achieves superior robustness certification performance and efficiency compared to baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses over-parameterization in neural network robustness certification
Enhances certifiable robustness against semantic transformations like brightness
Improves tightness and scalability of certified training methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pruning neurons using Unbiased Smooth Neuron metric
Applying Wasserstein distance loss for concentration
Enhancing robustness certification via layer-wise pruning
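The Wasserstein distance loss above is described only at a high level here; as a minimal sketch of the general idea, the code below penalizes the spread of each neuron's response distribution by measuring its 1-D Wasserstein-1 distance to a fully concentrated counterpart (a point mass at its own mean). The penalty form and function names are assumptions for illustration, not the paper's loss.

```python
import numpy as np

def wasserstein_1d(a: np.ndarray, b: np.ndarray) -> float:
    """Wasserstein-1 distance between two equal-size 1-D empirical samples:
    the mean absolute difference of their sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def concentration_penalty(layer_acts: np.ndarray) -> float:
    """Hypothetical concentration regularizer: average, over neurons, of the
    Wasserstein-1 distance between each neuron's response distribution and a
    point mass at its mean. layer_acts: shape (batch, num_neurons)."""
    penalty = 0.0
    for j in range(layer_acts.shape[1]):
        col = layer_acts[:, j]
        target = np.full_like(col, col.mean())  # fully concentrated reference
        penalty += wasserstein_1d(col, target)
    return penalty / layer_acts.shape[1]
```

A perfectly concentrated neuron contributes zero penalty, so minimizing this term alongside the task loss pushes post-pruning response distributions toward tighter concentration, which is the property the paper links to certification tightness.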
Hanjiang Hu
Carnegie Mellon University
Machine Learning · Control · Robotics
Bowei Li
Robotics Institute, Carnegie Mellon University
Ziwei Wang
Robotics Institute, Carnegie Mellon University; School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Tianhao Wei
Carnegie Mellon University
Robotics · Machine Learning · Control
Casidhe Hutchison
Robotics Institute, Carnegie Mellon University
Eric Sample
Robotics Institute, Carnegie Mellon University
Changliu Liu
Associate Professor, Carnegie Mellon University
Robotics · human-robot interaction · motion planning · optimization · multi-agent systems