Fine-Tuning of Neural Network Approximate MPC without Retraining via Bayesian Optimization

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Approximate Model Predictive Control (AMPC) is often impractical to deploy because its parameters must be tuned manually and the underlying neural network retrained repeatedly. Method: This paper proposes a data-driven, Bayesian-optimization-based method for online refinement of AMPC parameters, reported as the first application of Bayesian optimization to AMPC parameter tuning. Crucially, it enables hardware-in-the-loop adaptation without retraining the underlying neural network. The approach removes the subjectivity and counterintuitive trial-and-error of manual tuning, especially for high-dimensional systems or when the cost function is difficult to model analytically. Results: Hardware experiments on swing-up control of an inverted cartpole and yaw control of an under-actuated balancing unicycle show that the proposed method outperforms nominal AMPC using only minimal experimental data, demonstrating its data efficiency and cross-task generalizability.

📝 Abstract
Approximate model-predictive control (AMPC) aims to imitate an MPC's behavior with a neural network, removing the need to solve an expensive optimization problem at runtime. However, during deployment, the parameters of the underlying MPC must usually be fine-tuned. This often renders AMPC impractical as it requires repeatedly generating a new dataset and retraining the neural network. Recent work addresses this problem by adapting AMPC without retraining using approximated sensitivities of the MPC's optimization problem. Currently, this adaptation must be done by hand, which is labor-intensive and can be unintuitive for high-dimensional systems. To solve this issue, we propose using Bayesian optimization to tune the parameters of AMPC policies based on experimental data. By combining model-based control with direct and local learning, our approach achieves superior performance to nominal AMPC on hardware, with minimal experimentation. This allows automatic and data-efficient adaptation of AMPC to new system instances and fine-tuning to cost functions that are difficult to directly implement in MPC. We demonstrate the proposed method in hardware experiments for the swing-up maneuver on an inverted cartpole and yaw control of an under-actuated balancing unicycle robot, a challenging control problem.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning AMPC parameters without retraining neural networks
Automating labor-intensive manual adaptation for high-dimensional systems
Enabling data-efficient adaptation to new system instances and cost functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian optimization for AMPC parameter tuning
Model-based control combined with direct local learning
Automatic adaptation without neural network retraining
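To make the core idea above concrete, here is a minimal, self-contained 1D sketch of Bayesian optimization for tuning an AMPC parameter. This is not the paper's implementation: the closed-loop cost is replaced by a synthetic function standing in for a hardware rollout, and all names (`closed_loop_cost`, the kernel length scale, the parameter bounds) are illustrative assumptions. A Gaussian-process surrogate with expected improvement picks the next parameter value to try, so each "experiment" is spent where the surrogate expects the most gain.

```python
import math
import numpy as np

def closed_loop_cost(theta):
    """Stand-in for one hardware rollout: closed-loop cost of the
    AMPC policy run with cost weight theta (synthetic, for illustration)."""
    return (theta - 1.3) ** 2 + 0.05 * np.sin(5.0 * theta)

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    """GP posterior mean and variance at query points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, best):
    """EI acquisition for minimization (best = lowest observed cost)."""
    sigma = np.sqrt(var)
    imp = best - mu
    z = imp / sigma
    Phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return imp * Phi + sigma * phi

rng = np.random.default_rng(0)
bounds = (0.0, 3.0)
X = rng.uniform(bounds[0], bounds[1], size=3)      # a few initial rollouts
y = np.array([closed_loop_cost(t) for t in X])
grid = np.linspace(bounds[0], bounds[1], 200)      # candidate parameter values

for _ in range(12):                                # each iteration = one experiment
    mu, var = gp_posterior(X, y, grid)
    theta_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.append(X, theta_next)
    y = np.append(y, closed_loop_cost(theta_next))

best_theta = X[np.argmin(y)]
```

In the paper's setting, `closed_loop_cost` would be replaced by an actual hardware experiment with the AMPC policy adjusted via the sensitivity-based (retraining-free) parameter update, which is what makes each BO query cheap enough to run in the loop.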
Henrik Hose
RWTH Aachen University

Paul Brunzema
Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University, Germany

Alexander von Rohr
TU Munich
Bayesian Optimization, Reinforcement Learning, Control Theory

Alexander Gräfe
Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University, Germany

Angela P. Schoellig
Technical University of Munich, Germany

Sebastian Trimpe
Professor, RWTH Aachen University
Control, Machine Learning, Networked Systems, Robotics