Two-Stage Learning of Stabilizing Neural Controllers via Zubov Sampling and Iterative Domain Expansion

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural controllers for continuous-time nonlinear systems often lack formal stability guarantees and scalable estimates of their regions of attraction (ROAs). Method: This paper proposes a two-stage learning framework that jointly synthesizes a neural controller and a Lyapunov function, characterizing the ROA via Zubov's equation. It introduces a Zubov-guided sampling strategy and an iterative domain expansion mechanism, and it extends the α,β-CROWN verifier with automatic bound propagation through the Jacobian of the system dynamics and a verification scheme that avoids computationally expensive bisection search. Results: Experiments show that the computed ROAs are 5 to 1.5×10⁵ times larger in volume than those obtained by baseline methods, and formal verification on continuous-time systems is 40 to 10,000 times faster than with the SMT solver dReal.

📝 Abstract
Learning-based neural network (NN) control policies have shown impressive empirical performance. However, obtaining stability guarantees and estimates of the region of attraction for these learned neural controllers is challenging due to the lack of stable and scalable training and verification algorithms. Although previous works in this area have achieved great success, much conservatism remains in their frameworks. In this work, we propose a novel two-stage training framework to jointly synthesize the controller and Lyapunov function for continuous-time systems. By leveraging a Zubov-inspired region of attraction characterization to directly estimate stability boundaries, we propose a novel training data sampling strategy and a domain updating mechanism that significantly reduces the conservatism in training. Moreover, unlike existing works on continuous-time systems that rely on an SMT solver to formally verify the Lyapunov condition, we extend the state-of-the-art neural network verifier $\alpha,\!\beta$-CROWN with the capability of performing automatic bound propagation through the Jacobian of dynamical systems and a novel verification scheme that avoids expensive bisection. To demonstrate the effectiveness of our approach, we conduct numerical experiments by synthesizing and verifying controllers on several challenging nonlinear systems across multiple dimensions. We show that our training can yield regions of attraction with volume $5 - 1.5\cdot 10^{5}$ times larger compared to the baselines, and our verification on continuous systems can be up to $40 - 10000$ times faster compared to the traditional SMT solver dReal. Our code is available at https://github.com/Verified-Intelligence/Two-Stage_Neural_Controller_Training.
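The Zubov-style ROA characterization in the abstract can be illustrated with a small sketch. Zubov's equation takes the form $\nabla W(x)\cdot f(x) = -\psi(x)\,(1 - W(x))$ with $W(0)=0$ and $W \to 1$ at the ROA boundary, so the PDE residual can serve as a pointwise training loss for a learned $W$. The function names below (`W`, `psi`, `zubov_residual`) are illustrative assumptions, not the authors' implementation; the check uses a 1-D system where the exact Zubov solution is known in closed form.

```python
import numpy as np

def zubov_residual(W, grad_W, f, psi, x):
    """Pointwise residual of Zubov's PDE:
    grad_W(x)·f(x) + psi(x) * (1 - W(x)), which is zero for an exact solution."""
    return float(grad_W(x) @ f(x) + psi(x) * (1.0 - W(x)))

# Sanity check on dx/dt = -x: W(x) = 1 - exp(-x^2/2) solves Zubov's
# equation with psi(x) = x^2, so the residual should vanish everywhere.
f      = lambda x: -x
W      = lambda x: 1.0 - np.exp(-(x @ x) / 2.0)
grad_W = lambda x: x * np.exp(-(x @ x) / 2.0)
psi    = lambda x: float(x @ x)

for x0 in (np.array([0.5]), np.array([1.5]), np.array([-2.0])):
    assert abs(zubov_residual(W, grad_W, f, psi, x0)) < 1e-12
```

In a learning setting, this residual would be squared and averaged over sampled states as one term of the training loss, with `W` and `grad_W` given by a neural network and its autograd gradient.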
Problem

Research questions and friction points this paper is trying to address.

Develop stable neural controllers with Lyapunov guarantees
Reduce conservatism in training via Zubov sampling
Enable faster verification without SMT solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training for neural controllers
Zubov-inspired sampling reduces conservatism
Neural verifier with Jacobian bound propagation
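The iterative domain expansion named above can be sketched as a simple loop: train on a box, certify a sublevel set of the learned function inside it, then enlarge the box and resample. The growth factor, cap, and helper names here are illustrative assumptions only; the paper's actual schedule and stopping rule may differ.

```python
def expand_domain(radius, grow=1.5, max_radius=4.0):
    """Enlarge the training box once the current one is certified (sketch)."""
    return min(radius * grow, max_radius)

radius = 1.0
history = [radius]
for _ in range(5):
    # ... train W and the controller on [-radius, radius]^n, then verify a
    # sublevel set {W <= c} inside that box; on success, grow the domain
    # and resample near the estimated Zubov boundary.
    radius = expand_domain(radius)
    history.append(radius)
```

The cap (`max_radius`) stands in for whatever physical or modeling limit bounds the state space of interest.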
Haoyu Li
University of Illinois Urbana-Champaign
Xiangru Zhong
University of Illinois Urbana-Champaign
Bin Hu
University of Illinois Urbana-Champaign
Huan Zhang
University of Illinois Urbana-Champaign