Reconcile Certified Robustness and Accuracy for DNN-based Smoothed Majority Vote Classifier

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the interplay between certified robustness and generalization of smoothed majority-voting classifiers within the PAC-Bayesian framework. Existing approaches suffer from dimension-dependent guarantees and struggle to jointly optimize robustness and accuracy. To address this, we propose the first dimension-agnostic spectral regularization strategy for smoothed voting by incorporating the spectral norm of classifier weights into the architecture. Coupled with spherical Gaussian input smoothing during training, we derive tight, unified theoretical upper bounds that simultaneously characterize both the certified robust radius and the generalization error. Experiments on standard benchmarks—including CIFAR-10—demonstrate that our method significantly improves certified accuracy (+2.3%) while preserving natural accuracy (±0.5%), thereby achieving an effective trade-off between robustness and generalization.

Technology Category

Application Category

📝 Abstract
Within the PAC-Bayesian framework, the Gibbs classifier (defined on a posterior $Q$) and the corresponding $Q$-weighted majority vote classifier are commonly used to analyze the generalization performance. However, there exists a notable lack in theoretical research exploring the certified robustness of majority vote classifier and its interplay with generalization. In this study, we develop a generalization error bound that possesses a certified robust radius for the smoothed majority vote classifier (i.e., the $Q$-weighted majority vote classifier with smoothed inputs); In other words, the generalization bound holds under any data perturbation within the certified robust radius. As a byproduct, we find that the underpinnings of both the generalization bound and the certified robust radius draw, in part, upon weight spectral norm, which thereby inspires the adoption of spectral regularization in smooth training to boost certified robustness. Utilizing the dimension-independent property of spherical Gaussian inputs in smooth training, we propose a novel and inexpensive spectral regularizer to enhance the smoothed majority vote classifier. In addition to the theoretical contribution, a set of empirical results is provided to substantiate the effectiveness of our proposed method.
Problem

Research questions and friction points this paper is trying to address.

Reconcile certified robustness and accuracy for smoothed majority vote classifiers
Develop generalization error bound with certified robust radius
Propose spectral regularization to enhance certified robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed certified robust generalization bound for smoothed majority vote
Proposed inexpensive spectral regularizer using spherical Gaussian inputs
Enhanced certified robustness via spectral regularization in smooth training
🔎 Similar Papers
No similar papers found.