🤖 AI Summary
Existing deep learning approaches struggle to enforce linear matrix inequality (LMI) constraints strictly within neural networks, and thus lack formal guarantees such as stability and robustness. This work proposes the first modular, differentiable LMI projection layer that structurally ensures hard constraint satisfaction by modeling LMIs as the intersection of affine equalities and the positive semidefinite cone. The forward pass leverages the Douglas–Rachford splitting algorithm, while gradients are computed efficiently via implicit differentiation. The method is theoretically guaranteed to converge to a feasible solution and demonstrates substantial improvements over soft-constraint baselines on tasks such as invariant ellipsoid synthesis and joint controller–certificate design for perturbed systems, maintaining high feasibility and fast inference even under distribution shift.
📝 Abstract
Linear matrix inequalities (LMIs) have played a central role in certifying stability, robustness, and forward invariance of dynamical systems. Despite rapid development in learning-based methods for control design and certificate synthesis, existing approaches often fail to preserve the hard matrix inequality constraints required for formal guarantees. We propose LMI-Net, an efficient and modular differentiable projection layer that enforces LMI constraints by construction. Our approach lifts the set defined by LMI constraints into the intersection of an affine equality constraint and the positive semidefinite cone, performs the forward pass via Douglas–Rachford splitting, and supports efficient backward propagation through implicit differentiation. We establish theoretical guarantees that the projection layer converges to a feasible point, certifying that LMI-Net transforms a generic neural network into a reliable model satisfying LMI constraints. In experiments including invariant ellipsoid synthesis and joint controller-and-certificate design for a family of disturbed linear systems, LMI-Net substantially improves feasibility over soft-constrained models under distribution shift while retaining fast inference speed, bridging semidefinite-program-based certification and modern learning techniques.
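To make the forward-pass idea concrete, the following is a minimal NumPy sketch of Douglas–Rachford splitting for projecting onto the intersection of the PSD cone and an affine set. It is not the paper's implementation: the affine constraint here is a toy stand-in (trace(X) = 1), the stopping rule is a fixed iteration count, and the implicit-differentiation backward pass is omitted entirely.

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues at zero."""
    w, V = np.linalg.eigh((X + X.T) / 2)  # symmetrize for numerical safety
    return (V * np.clip(w, 0, None)) @ V.T

def proj_affine(X):
    """Project onto the toy affine set {X : trace(X) = 1}.

    This stands in for the paper's general affine equality constraint; any
    affine set with a closed-form (or cached-factorization) projection works.
    """
    n = X.shape[0]
    return X - ((np.trace(X) - 1.0) / n) * np.eye(n)

def dr_project(X0, iters=500):
    """Douglas-Rachford splitting for the two-set feasibility problem.

    Iterates z <- z + P_affine(2 P_psd(z) - z) - P_psd(z); the shadow
    sequence P_psd(z) converges to a point in the intersection when it
    is nonempty (here it is: I/n is PSD with unit trace).
    """
    Z = X0.copy()
    for _ in range(iters):
        Xp = proj_psd(Z)
        Z = Z + proj_affine(2 * Xp - Z) - Xp
    return proj_psd(Z)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = dr_project((M + M.T) / 2)
```

In the layer described by the abstract, `X0` would be the raw (generally infeasible) output of the upstream network, and gradients through the fixed point would come from implicit differentiation rather than unrolling the loop above.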