Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited expressive power of Lipschitz-constrained networks in deep learning model certification, this paper proposes BRONet, a highly expressive, provably robust neural network. Methodologically, the authors introduce the Block Reflector Orthogonal (BRO) layer, which enforces efficient orthogonal weight constraints via a block reflector parameterization, and design the Logit Annealing (LA) loss, a progressive logit-scaling mechanism that jointly optimizes classification margin and Lipschitz-constant tightness. Theoretically, BRONet's Lipschitz constant can be characterized precisely. Empirically, BRONet achieves state-of-the-art certified robust accuracy on CIFAR-10/100, Tiny-ImageNet, and ImageNet, significantly outperforming existing Lipschitz-bounded network baselines.
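The core idea behind the BRO layer can be illustrated with the classical block (Householder) reflector, which maps any full-rank matrix V to an exactly orthogonal matrix. The sketch below is an assumption based on the standard block-reflector construction Q = I - 2 V (VᵀV)⁻¹ Vᵀ; the paper's exact layer parameterization may differ in details:

```python
import numpy as np

# A minimal sketch of a block reflector producing an orthogonal matrix.
# NOTE: this follows the textbook construction; BRONet's actual layer
# parameterization is an assumption here, not taken from the paper.
rng = np.random.default_rng(0)
n, k = 8, 3                       # full dimension, reflector rank
V = rng.standard_normal((n, k))   # any full-rank n-by-k matrix

# P = V (V^T V)^{-1} V^T is an orthogonal projector; Q = I - 2P reflects
# across the subspace spanned by V, so Q is symmetric and Q^2 = I.
P = V @ np.linalg.inv(V.T @ V) @ V.T
Q = np.eye(n) - 2.0 * P

# Orthogonality implies ||Qx|| = ||x||, i.e. the layer is 1-Lipschitz in l2.
print(np.allclose(Q.T @ Q, np.eye(n)))  # → True
```

Because Q is exactly orthogonal for any choice of V, no iterative orthogonalization or spectral-norm projection is needed during training, which is the efficiency advantage such layers target.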

📝 Abstract
Lipschitz neural networks are well-known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at https://github.com/ntuaislab/BRONet.
Problem

Research questions and friction points this paper is trying to address.

Enhancing certified robustness in Lipschitz neural networks
Designing expressive orthogonal layers for Lipschitz architectures
Improving margin via annealing loss for better robustness
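The link between margin and certified robustness that these questions build on is standard for Lipschitz classifiers: if the network is L-Lipschitz in l2, an input is certifiably robust within radius (top logit minus runner-up) / (√2 · L). A minimal sketch of that certificate (the helper name `certified_radius` is illustrative, not from the paper):

```python
import numpy as np

def certified_radius(logits, lip=1.0):
    """Standard l2 certified radius for an L-Lipschitz classifier:
    the top-two logit margin divided by sqrt(2) * L. A larger margin
    (what the annealing loss targets) directly enlarges this radius."""
    top2 = np.sort(np.asarray(logits, dtype=float))[-2:]
    margin = top2[1] - top2[0]
    return max(margin, 0.0) / (np.sqrt(2.0) * lip)

# Example: logits [3.0, 1.0, 0.5] with a 1-Lipschitz network
print(certified_radius([3.0, 1.0, 0.5]))  # → ~1.414 (= 2 / sqrt(2))
```

This is why the paper couples the layer design (tight Lipschitz bound) with the loss design (larger margins): both terms of the ratio improve the certificate.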
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block Reflector Orthogonal layer enhances Lipschitz networks
Logit annealing loss increases margin for robustness
BRONet achieves state-of-the-art certified robustness performance
Bo-Han Lai
Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Pin-Han Huang
Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Bo-Han Kung
Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Shang-Tse Chen
Associate Professor, National Taiwan University
Machine Learning · Artificial Intelligence · Security · Algorithmic Game Theory · Data Science