🤖 AI Summary
To address the unclear mechanisms underlying adversarial robust generalization, this paper proposes the Weight Curvature Index (WCI), the first unified metric jointly quantifying parameter scale (i.e., the Frobenius norm of weights) and loss curvature (i.e., the trace of the Hessian). Based on the PAC-Bayes framework and second-order Taylor approximation, we derive an analytically tractable robust generalization bound, explicitly revealing the theoretical interplay among curvature, parameters, and robustness. Empirical evaluation demonstrates that WCI exhibits strong correlation with robust accuracy across multiple datasets (Spearman's ρ > 0.92), significantly outperforming conventional complexity measures. Leveraging this insight, we design a low-curvature regularization strategy that improves robust generalization performance by up to 3.7%.
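The two ingredients of the index can be estimated cheaply in practice: the Frobenius norm is read off the weights directly, and the Hessian trace can be approximated with Hutchinson's estimator using Hessian-vector products. The sketch below illustrates this on a tiny linear-regression loss, where the trace is also available in closed form for comparison. The multiplicative combination `||w||_F * sqrt(tr(H))` is one plausible form of the index chosen here for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear model with MSE loss: L(w) = ||X w - y||^2 / (2n)
X = rng.normal(size=(64, 8))
y = rng.normal(size=64)
w = rng.normal(size=8) * 0.1
n = len(y)

def grad(w):
    """Gradient of the MSE loss at w."""
    return X.T @ (X @ w - y) / n

def hessian_trace(w, probes=50, eps=1e-4):
    """Hutchinson estimator of tr(H) via finite-difference
    Hessian-vector products: v^T H v averaged over Rademacher probes."""
    est = 0.0
    for _ in range(probes):
        v = rng.choice([-1.0, 1.0], size=w.shape)
        hv = (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)
        est += v @ hv
    return est / probes

frob = np.linalg.norm(w)              # Frobenius norm of the weights
tr_h = hessian_trace(w)               # estimated loss curvature
wci = frob * np.sqrt(max(tr_h, 0.0))  # illustrative weight-curvature index

# For this quadratic loss, tr(H) = tr(X^T X) / n exactly; sanity-check:
exact = np.trace(X.T @ X) / n
print(f"Hutchinson tr(H) ~ {tr_h:.3f} (exact {exact:.3f}), WCI ~ {wci:.3f}")
```

For deep networks the same recipe applies per layer, with the Hessian-vector products supplied by automatic differentiation (e.g., double backpropagation) instead of finite differences.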
📄 Abstract
Despite extensive research on adversarial examples, the underlying mechanisms of adversarially robust generalization, a critical yet challenging problem for deep learning, remain largely unknown. In this work, we propose a novel perspective for deciphering adversarially robust generalization through the lens of the Weight-Curvature Index (WCI). The proposed WCI quantifies the vulnerability of models to adversarial perturbations using the Frobenius norm of the weight matrices and the trace of the Hessian matrices. We prove generalization bounds based on PAC-Bayesian theory and a second-order approximation of the loss function to elucidate the interplay between the robust generalization gap, model parameters, and loss-landscape curvature. Our theory and experiments show that WCI effectively captures the robust generalization performance of adversarially trained models. By offering a nuanced understanding of adversarial robustness grounded in the scale of model parameters and the curvature of the loss landscape, our work provides crucial insights for designing more resilient deep learning models, enhancing their reliability and security.
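The way a PAC-Bayesian bound couples the weight norm and the Hessian trace can be sketched with a standard derivation (an outline under textbook assumptions, not the paper's exact bound; here $\sigma$ is the posterior perturbation scale and $\lambda$ the prior scale, both introduced for illustration):

```latex
% Gaussian posterior Q = N(w, \sigma^2 I) around the trained weights w,
% prior P = N(0, \lambda^2 I); with probability at least 1 - \delta,
\mathbb{E}_{w' \sim Q}[L(w')]
  \le \mathbb{E}_{w' \sim Q}[\hat{L}(w')]
    + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(n/\delta)}{2(n-1)}}.

% Second-order Taylor expansion of the empirical loss around w:
\mathbb{E}_{w' \sim Q}[\hat{L}(w')]
  \approx \hat{L}(w) + \frac{\sigma^2}{2}\,\mathrm{tr}(H).

% KL divergence between the two Gaussians (d = number of parameters):
\mathrm{KL}(Q \,\|\, P)
  = \frac{1}{2}\left[
      \frac{\|w\|_F^2}{\lambda^2}
      + d\,\frac{\sigma^2}{\lambda^2}
      - d
      + d \ln\frac{\lambda^2}{\sigma^2}
    \right].
```

The trace of the Hessian enters through the Taylor term and the Frobenius norm through the KL term, so tightening the bound requires controlling both quantities jointly, which is the motivation for a combined weight-curvature index.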