A Scalable Approach for Safe and Robust Learning via Lipschitz-Constrained Networks

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing certified robustness and scalability in safety-critical neural network applications, this paper proposes a scalable convex training framework with Lipschitz constraints. Methodologically, it (1) reformulates non-convex robust training as a convex optimization problem via loop transformation and semidefinite relaxation; and (2) introduces a Randomized Subspace Linear Matrix Inequality (RS-LMI) technique that decomposes the global LMI constraints governing Lipschitz continuity into tractable local constraints through low-dimensional projections and sketching, avoiding the prohibitive cost of solving a global semidefinite program. Empirically, on MNIST, CIFAR-10, and ImageNet, the framework achieves high classification accuracy while significantly tightening Lipschitz bounds and accelerating both training and certification. The results demonstrate its effectiveness, certified robustness, and practical scalability for large-scale deep learning models.
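As background for why tighter bounds matter, a minimal sketch (using toy random weights, and not the paper's SDP-based method) of the standard naive certificate: for a feedforward network with 1-Lipschitz activations such as ReLU, the product of per-layer spectral norms upper-bounds the global Lipschitz constant, but is typically very loose. Semidefinite certificates like the one this paper makes scalable exist precisely to tighten this kind of bound.

```python
import numpy as np

# Toy stand-in for a small 784 -> 64 -> 32 -> 10 network's weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 784)) * 0.05,
           rng.standard_normal((32, 64)) * 0.10,
           rng.standard_normal((10, 32)) * 0.10]

def naive_lipschitz_bound(weights):
    """Product of per-layer spectral norms (largest singular values).

    Valid for any network whose activations are 1-Lipschitz, but
    usually far looser than an SDP-certified bound.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # ord=2 gives the spectral norm
    return bound

print(naive_lipschitz_bound(weights))
```

The looseness of this product bound is one motivation for LMI-based certificates, which account for how the activations couple consecutive layers rather than bounding each layer in isolation.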

📝 Abstract
Certified robustness is a critical property for deploying neural networks (NN) in safety-critical applications. A principled approach to achieving such guarantees is to constrain the global Lipschitz constant of the network. However, accurate methods for Lipschitz-constrained training often suffer from non-convex formulations and poor scalability due to reliance on global semidefinite programs (SDPs). In this letter, we propose a convex training framework that enforces global Lipschitz constraints via semidefinite relaxation. By reparameterizing the NN using loop transformation, we derive a convex admissibility condition that enables tractable and certifiable training. While the resulting formulation guarantees robustness, its scalability is limited by the size of the global SDP. To overcome this, we develop a randomized subspace linear matrix inequality (RS-LMI) approach that decomposes the global constraints into sketched layerwise constraints projected onto low-dimensional subspaces, yielding a smooth and memory-efficient training objective. Empirical results on MNIST, CIFAR-10, and ImageNet demonstrate that the proposed framework achieves competitive accuracy with significantly improved Lipschitz bounds and runtime performance.
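For reference, the global Lipschitz constraint discussed in the abstract bounds how much the network output $f$ can move when the input moves; in the $\ell_2$ setting:

```latex
\|f(x_1) - f(x_2)\|_2 \;\le\; L \,\|x_1 - x_2\|_2 \qquad \forall\, x_1, x_2 .
```

A certified bound $L$ yields robustness directly: no input perturbation of norm $\varepsilon$ can change the output by more than $L\varepsilon$, so a sufficiently large classification margin guarantees the prediction is unchanged inside the $\varepsilon$-ball.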
Problem

Research questions and friction points this paper is trying to address.

Ensuring neural network robustness via Lipschitz constraints
Overcoming scalability issues in global Lipschitz training
Proposing convex training with efficient subspace decomposition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convex training framework with Lipschitz constraints
Loop transformation for tractable certification
Randomized subspace LMI for scalability
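The subspace-sketching idea behind the RS-LMI bullet can be illustrated with a toy: instead of certifying that a large matrix is positive semidefinite with a full eigendecomposition (the bottleneck of global SDPs), test its projections onto random low-dimensional subspaces. The function name and the exact scheme below are illustrative assumptions, not the paper's formulation, which applies sketched constraints layerwise during training.

```python
import numpy as np

rng = np.random.default_rng(1)

def sketched_psd_check(M, sketch_dim, n_sketches=8, tol=1e-8):
    """Test P.T @ M @ P >= 0 on random orthonormal subspaces.

    Passing every sketch is necessary (not sufficient) for M to be
    positive semidefinite, but each test costs eigenvalue work on a
    sketch_dim x sketch_dim matrix instead of the full n x n one.
    """
    n = M.shape[0]
    for _ in range(n_sketches):
        # QR gives an orthonormal basis P of a random sketch_dim subspace.
        P, _ = np.linalg.qr(rng.standard_normal((n, sketch_dim)))
        if np.linalg.eigvalsh(P.T @ M @ P).min() < -tol:
            return False  # found a direction of negative curvature
    return True

A = rng.standard_normal((50, 50))
M_psd = A @ A.T          # positive semidefinite by construction
M_neg = -np.eye(50)      # negative definite, so never PSD
print(sketched_psd_check(M_psd, 5))  # True
print(sketched_psd_check(M_neg, 5))  # False
```

Because each projected constraint is small and smooth, conditions of this flavor can be folded into a first-order training objective, which is what gives the sketched layerwise formulation its memory and runtime advantage over solving one global SDP.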