R2DN: Scalable Parameterization of Contracting and Lipschitz Recurrent Deep Networks

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses critical limitations of recurrent neural networks (RNNs) in data-driven control, namely poor scalability, lack of guaranteed stability, and insufficient robustness to input perturbations. To this end, the authors propose the Robust Recurrent Deep Network (R²DN), an architecture that ensures contraction and input-to-state robustness by construction via a feedback interconnection of a linear time-invariant system and a 1-Lipschitz deep feedforward network. R²DN introduces a scalable direct parameterization that eliminates the iterative equilibrium-point computation required by recurrent equilibrium networks (RENs): Lipschitz constraints are enforced through structured weight design, and forward and backward passes map efficiently onto GPUs. Experiments demonstrate up to 10× faster training and inference than RENs, enabling larger models, batch sizes, and sequence lengths. R²DN achieves test performance comparable to RENs in nonlinear system identification, state observer design, and learning-based feedback control, while being substantially cheaper to train and deploy.

📝 Abstract
This paper presents the Robust Recurrent Deep Network (R2DN), a scalable parameterization of robust recurrent neural networks for machine learning and data-driven control. We construct R2DNs as a feedback interconnection of a linear time-invariant system and a 1-Lipschitz deep feedforward network, and directly parameterize the weights so that our models are stable (contracting) and robust to small input perturbations (Lipschitz) by design. Our parameterization uses a structure similar to the previously-proposed recurrent equilibrium networks (RENs), but without the requirement to iteratively solve an equilibrium layer at each time-step. This speeds up model evaluation and backpropagation on GPUs, and makes it computationally feasible to scale up the network size, batch size, and input sequence length in comparison to RENs. We compare R2DNs to RENs on three representative problems in nonlinear system identification, observer design, and learning-based feedback control and find that training and inference are both up to an order of magnitude faster with similar test set performance, and that training/inference times scale more favorably with respect to model expressivity.
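To make the architecture described in the abstract concrete, below is a minimal numpy sketch of the general structure: an LTI state-space system in feedback with a 1-Lipschitz feedforward network, evaluated explicitly at each time step (no equilibrium solve). This is an illustration only, not the paper's parameterization: the matrix names (A, B1, B2, C1, C2), the spectral-normalization trick used to bound the network's Lipschitz constant, and all dimensions and scalings are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, n_iter=50):
    """Scale W so its spectral norm is (approximately) at most 1.

    Uses power iteration to estimate the largest singular value. This is a
    stand-in for the paper's direct Lipschitz parameterization, not its method.
    """
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v
    return W / max(sigma, 1.0)

class R2DNSketch:
    """Illustrative only: LTI block in feedback with a 1-Lipschitz MLP."""

    def __init__(self, nx=4, nu=2, nv=8, ny=2):
        # A scaled to spectral norm <= 0.8 makes the linear block contracting
        # here; the paper instead parameterizes contraction directly.
        self.A = 0.8 * spectral_normalize(rng.normal(size=(nx, nx)))
        self.B1 = 0.05 * rng.normal(size=(nx, nv))   # network -> state
        self.B2 = 0.1 * rng.normal(size=(nx, nu))    # input -> state
        self.C1 = 0.05 * rng.normal(size=(nv, nx))   # state -> network
        self.C2 = 0.1 * rng.normal(size=(ny, nx))    # state -> output
        # 1-Lipschitz two-layer MLP: spectrally normalized weights composed
        # with ReLU (itself 1-Lipschitz).
        self.W1 = spectral_normalize(rng.normal(size=(nv, nv)))
        self.W2 = spectral_normalize(rng.normal(size=(nv, nv)))

    def lipschitz_net(self, v):
        return self.W2 @ np.maximum(self.W1 @ v, 0.0)

    def step(self, x, u):
        # The network input depends only on the current state, so each step is
        # an explicit computation -- no fixed-point (equilibrium) iteration,
        # which is the source of the claimed speedup over RENs.
        v = self.C1 @ x
        w = self.lipschitz_net(v)
        x_next = self.A @ x + self.B1 @ w + self.B2 @ u
        y = self.C2 @ x
        return x_next, y

model = R2DNSketch()
x = np.zeros(4)
for t in range(10):
    x, y = model.step(x, rng.normal(size=2))
```

Note the key structural point: within one time step the computation graph is acyclic (state → network → next state), so evaluation and backpropagation are plain feedforward passes, in contrast to RENs, whose equilibrium layer requires an iterative solve at every step.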
Problem

Research questions and friction points this paper is trying to address.

Scalable parameterization of robust recurrent neural networks
Ensures model stability and robustness to input perturbations
Improves training and inference speed compared to RENs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feedback interconnection of LTI and Lipschitz network
Direct parameterization for stability and robustness
Faster evaluation without iterative equilibrium solving
Nicholas H. Barbara
Australian Centre for Robotics (ACFR) and the School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Australia
Ruigang Wang
Australian Centre for Robotics (ACFR) and the School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Australia
Ian R. Manchester
Professor, University of Sydney. Director, ACFR. Director, ARIAM Hub.
robotics · nonlinear control · system identification · robust machine learning