AI Summary
This work addresses the challenge of dynamically allocating control authority between neural network-based learning of model uncertainty and model predictive control (MPC) for constraint enforcement. We propose a dynamic co-design framework wherein a deep neural network learns unmodeled system dynamics online, while MPC ensures satisfaction of state and input constraints and guarantees closed-loop safety; their outputs are fused in real time via a learnable weighting mechanism that governs responsibility allocation. Our key contributions include: (i) a novel online weight adaptation strategy that jointly ensures Lyapunov-based stability and learning adaptability, and (ii) support for lightweight, data-driven neural network fine-tuning during operation. Numerical experiments demonstrate substantial improvements in trajectory tracking accuracy and closed-loop stability during learning transients, effectively mitigating performance degradation inherent in conventional serial or hard-switching architectures.
Abstract
Deep Model Predictive Control (Deep MPC) is an evolving field that integrates model predictive control and deep learning. This manuscript focuses on a particular approach that employs a deep neural network in the loop with MPC. This class of approaches distributes control authority between the neural network and the MPC controller so that the neural network learns the model uncertainties while the MPC handles constraints. The approach is appealing because training data collected while the system is in operation can be used to fine-tune the neural network, and the MPC prevents unsafe behavior during those learning transients. This manuscript explains the implementation challenges of Deep MPC and algorithmic ways to distribute control authority, and argues that a poor choice in distributing control authority may lead to poor performance. One cause of such poor performance is illustrated through a numerical experiment on four-wheeled skid-steer dynamics.
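To make the idea of distributing control authority concrete, a common pattern is a convex blend of the MPC action and the neural-network action, with a scalar weight that is adapted online. The sketch below is a minimal illustration of that pattern, not the specific weighting law proposed in this manuscript; the function names, the error tolerance `err_tol`, and the fixed adaptation step are all hypothetical choices for exposition.

```python
import numpy as np

def blended_control(u_mpc, u_nn, lam):
    """Fuse MPC and neural-network actions with a weight lam in [0, 1].

    lam = 0 -> pure MPC (model-based, constraint-aware);
    lam = 1 -> pure neural network (learned dynamics compensation).
    """
    lam = float(np.clip(lam, 0.0, 1.0))
    return (1.0 - lam) * u_mpc + lam * u_nn

def update_weight(lam, tracking_error, err_tol=0.1, step=0.05):
    """Hypothetical weight adaptation: shift authority toward the
    neural network only while the observed tracking error stays
    below a tolerance; otherwise fall back toward MPC."""
    if tracking_error < err_tol:
        return min(1.0, lam + step)  # trust the learned model more
    return max(0.0, lam - step)      # retreat to the safe controller
```

During early operation one would start with `lam = 0` (full MPC authority) and let the weight grow only as the network's predictions prove reliable; a poorly chosen adaptation rule, e.g. one that hands authority to an undertrained network too quickly, is exactly the failure mode the manuscript's skid-steer experiment is meant to expose.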