Fast Non-Episodic Adaptive Tuning of Robot Controllers with Online Policy Optimization

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses online parameter adaptation for robot controllers that run continuously along a single trajectory, where the environment dynamics, policy class, and optimization objective are all time-varying, not known in advance, and no state resets or episode boundaries are available. To this end, the authors propose M-GAPS, a model-based online policy optimization algorithm. M-GAPS combines a joint reparameterization of the state space and policy class, which improves the optimization landscape, with geometric nonlinear controller modeling and efficient policy-gradient estimation, yielding both high data efficiency and robustness to disturbances. Hardware experiments on a quadrotor and a 1:6-scale Ackermann-steered ground vehicle show that M-GAPS converges faster and adapts more effectively than baselines that impose artificial episode segmentation, and that it adapts in real time to severe disturbances such as strong wind gusts and sudden payload changes. It thereby avoids both the limited flexibility of classical adaptive control and the low sample efficiency of model-free reinforcement learning.

📝 Abstract
We study online algorithms to tune the parameters of a robot controller in a setting where the dynamics, policy class, and optimality objective are all time-varying. The system follows a single trajectory without episodes or state resets, and the time-varying information is not known in advance. Focusing on nonlinear geometric quadrotor controllers as a test case, we propose a practical implementation of a single-trajectory model-based online policy optimization algorithm, M-GAPS, along with reparameterizations of the quadrotor state space and policy class to improve the optimization landscape. In hardware experiments, we compare to model-based and model-free baselines that impose artificial episodes. We show that M-GAPS finds near-optimal parameters more quickly, especially when the episode length is not favorable. We also show that M-GAPS rapidly adapts to heavy unmodeled wind and payload disturbances, and achieves similar strong improvement on a 1:6-scale Ackermann-steered car. Our results demonstrate the hardware practicality of this emerging class of online policy optimization that offers significantly more flexibility than classic adaptive control, while being more stable and data-efficient than model-free reinforcement learning.
Problem

Research questions and friction points this paper is trying to address.

Tuning robot controller parameters under time-varying dynamics and objectives
Optimizing policies without episodic resets or prior knowledge of changes
Adapting rapidly to unmodeled disturbances like wind and payloads
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online policy optimization without episodes
Reparameterized quadrotor state and policy
Adapts rapidly to unmodeled disturbances
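The non-episodic update at the heart of this line of work (a single trajectory, no resets, per-step gradient steps computed through a propagated sensitivity state) can be sketched on a toy problem. Everything below, the scalar plant `x' = a*x + b*u + w`, the one-parameter policy `u = -theta*x`, and all constants, is an illustrative assumption for exposition, not the paper's quadrotor model or the authors' actual implementation:

```python
import numpy as np

# Toy sketch: tune a feedback gain online along one continuous trajectory.
# Plant, policy, and constants are illustrative assumptions only.
rng = np.random.default_rng(0)
a, b, r = 0.9, 1.0, 0.1    # plant x' = a*x + b*u + w; per-step cost x^2 + r*u^2
theta, eta = 0.0, 5e-3     # policy gain u = -theta*x; gradient step size
x, y = 0.0, 0.0            # state and running sensitivity y ~ dx/dtheta
T = 20000
costs = []

for t in range(T):
    u = -theta * x
    costs.append(x**2 + r * u**2)
    # Gradient of the current per-step cost w.r.t. theta, taken through
    # the single trajectory via the chain rule and the sensitivity y:
    du_dtheta = -x - theta * y                 # dpi/dtheta + (dpi/dx) * y
    g = 2 * x * y + 2 * r * u * du_dtheta
    # Propagate the sensitivity with the plant Jacobians, then step the
    # plant and take one gradient step -- no episodes, no state resets:
    y = a * y + b * du_dtheta
    x = a * x + b * u + 0.1 * rng.standard_normal()
    theta -= eta * g
```

Run on this toy plant, the gain drifts from 0 toward the stabilizing LQR-like optimum (about 0.82 for these constants) while the per-step cost falls, all without ever segmenting the trajectory into episodes. The paper's contribution operates in the same spirit but on nonlinear geometric quadrotor controllers with time-varying dynamics and objectives.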