LMI-Net: Linear Matrix Inequality–Constrained Neural Networks via Differentiable Projection Layers

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep learning approaches struggle to strictly enforce linear matrix inequality (LMI) constraints within neural networks, and thus cannot provide formal guarantees such as stability and robustness. This work proposes the first modular, differentiable LMI projection layer that structurally ensures hard constraint satisfaction by modeling the LMI-feasible set as the intersection of an affine equality constraint and the positive semidefinite cone. The forward pass leverages the Douglas–Rachford splitting algorithm, while gradients are computed efficiently via implicit differentiation. The method is theoretically guaranteed to converge to a feasible solution and demonstrates substantial improvements over soft-constraint baselines on tasks such as invariant ellipsoid synthesis and joint controller–certificate design for perturbed systems, maintaining high feasibility and fast inference even under distribution shift.
📝 Abstract
Linear matrix inequalities (LMIs) have played a central role in certifying stability, robustness, and forward invariance of dynamical systems. Despite rapid development in learning-based methods for control design and certificate synthesis, existing approaches often fail to preserve the hard matrix inequality constraints required for formal guarantees. We propose LMI-Net, an efficient and modular differentiable projection layer that enforces LMI constraints by construction. Our approach lifts the set defined by LMI constraints into the intersection of an affine equality constraint and the positive semidefinite cone, performs the forward pass via Douglas-Rachford splitting, and supports efficient backward propagation through implicit differentiation. We establish theoretical guarantees that the projection layer converges to a feasible point, certifying that LMI-Net transforms a generic neural network into a reliable model satisfying LMI constraints. Evaluated on experiments including invariant ellipsoid synthesis and joint controller-and-certificate design for a family of disturbed linear systems, LMI-Net substantially improves feasibility over soft-constrained models under distribution shift while retaining fast inference speed, bridging semidefinite-program-based certification and modern learning techniques.
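The forward pass described in the abstract — Douglas–Rachford splitting between an affine equality constraint and the positive semidefinite cone — can be illustrated with a small NumPy sketch. This is an illustrative feasibility projection under generic assumptions (a dense constraint matrix `A` acting on the vectorized matrix variable), not the authors' implementation; function and variable names are hypothetical.

```python
import numpy as np

def proj_psd(X):
    # Euclidean projection onto the PSD cone: symmetrize,
    # eigendecompose, and clip negative eigenvalues to zero.
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

def proj_affine(x, A, b, A_pinv):
    # Euclidean projection onto the affine set {x : A x = b}.
    return x - A_pinv @ (A @ x - b)

def dr_project(A, b, n, iters=500):
    # Douglas-Rachford splitting for the feasibility problem
    #   find X symmetric PSD with A vec(X) = b.
    # Iteration: z+ = z + P_psd(2 P_aff(z) - z) - P_aff(z);
    # at a fixed point, P_aff(z) lies in the intersection.
    A_pinv = A.T @ np.linalg.inv(A @ A.T)
    z = np.zeros(n * n)
    for _ in range(iters):
        x = proj_affine(z, A, b, A_pinv)
        y = proj_psd((2 * x - z).reshape(n, n)).ravel()
        z = z + y - x
    return proj_affine(z, A, b, A_pinv).reshape(n, n)
```

For example, with `n = 2` and the single trace constraint `trace(X) = 2` (encoded as `A = [[1, 0, 0, 1]]`, `b = [2]`), the iteration returns a PSD matrix satisfying the constraint. In LMI-Net this projection would sit after a generic network's output, with gradients obtained by implicit differentiation of the fixed-point condition rather than by unrolling the loop.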
Problem

Research questions and friction points this paper is trying to address.

Linear Matrix Inequalities
Neural Networks
Constraint Satisfaction
Formal Guarantees
Dynamical Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear Matrix Inequalities
Differentiable Projection
Neural Network Certification
Douglas-Rachford Splitting
Implicit Differentiation
Sunbochen Tang
Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Andrea Goertzen
Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Navid Azizan
Alfred H. (1929) and Jean M. Hayes Assistant Professor, MIT
AI · Optimization · Learning and Control · Signals and Systems · Trustworthy Autonomy