Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance degradation in robot sim-to-real transfer caused by unmodeled dynamics and environmental disturbances, this paper proposes an online adaptive control framework based on differentiable simulation. The method unifies residual dynamics modeling and real-time policy optimization within a differentiable simulation loop: gradients computed via backpropagation incorporate real-world measurements to continuously refine the dynamics model and update the control policy, without offline retraining. The approach adapts to unknown disturbances within 5 seconds of training. In quadrotor flight control, it reduces hovering error by up to 81% compared to L1-MPC and by up to 55% compared to DATT, and it remains robust in complex scenarios such as vision-based navigation. Key contributions include: (i) a gradient-driven online adaptation mechanism that bridges simulation and reality; (ii) joint co-adaptation of the dynamics model and policy within a single differentiable pipeline; and (iii) empirical validation of real-time adaptability and generalization across diverse aerial control tasks.
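The adaptation loop described above can be sketched on a toy 1D hover task. All names below are hypothetical illustrations, not the paper's code: the residual is a single scalar bias (the paper learns a residual dynamics model), and finite differences stand in for backpropagation through the differentiable simulator.

```python
# Toy 1D hover: real world has an unknown constant wind; the controller
# refines a residual estimate online and adapts its policy by gradient
# steps through the refined model. Names are illustrative only.
DT = 0.02

def true_step(x, v, u, wind):
    # Real-world dynamics: nominal model plus unknown wind acceleration.
    v2 = v + (u + wind) * DT
    return x + v2 * DT, v2

def model_step(x, v, u, d_hat):
    # Simulator: nominal dynamics plus the learned residual d_hat.
    v2 = v + (u + d_hat) * DT
    return x + v2 * DT, v2

def policy(x, v, gains):
    k1, k2 = gains
    return -k1 * x - k2 * v

def rollout_loss(gains, d_hat, x0=1.0, v0=0.0, horizon=100):
    # Hover cost accumulated over a short rollout of the refined model.
    x, v = x0, v0
    loss = 0.0
    for _ in range(horizon):
        x, v = model_step(x, v, policy(x, v, gains), d_hat)
        loss += x * x * DT
    return loss

def grad_fd(f, gains, eps=1e-4):
    # Central finite differences; a stand-in for autodiff backprop.
    g = []
    for i in range(len(gains)):
        p = list(gains); p[i] += eps
        m = list(gains); m[i] -= eps
        g.append((f(p) - f(m)) / (2 * eps))
    return g

# Online loop: act in the real world, refine the residual, update policy.
wind = 3.0                      # unknown to the controller
gains, d_hat = [4.0, 4.0], 0.0
x, v = 1.0, 0.0
for _ in range(500):
    u = policy(x, v, gains)
    x2, v2 = true_step(x, v, u, wind)
    # Residual learning: observed minus nominal acceleration.
    resid = (v2 - v) / DT - u
    d_hat += 0.1 * (resid - d_hat)
    # Policy adaptation: one gradient step through the refined model.
    g = grad_fd(lambda p: rollout_loss(p, d_hat), gains)
    gains = [k - 0.05 * gi for k, gi in zip(gains, g)]
    x, v = x2, v2
```

In the paper, the residual model is learned from data and gradients flow analytically through the differentiable simulator; this sketch only shows the structure of the loop, where model refinement and policy updates share one pipeline.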

📝 Abstract
Learning control policies in simulation enables rapid, safe, and cost-effective development of advanced robotic capabilities. However, transferring these policies to the real world remains difficult due to the sim-to-real gap, where unmodeled dynamics and environmental disturbances can degrade policy performance. Existing approaches, such as domain randomization and Real2Sim2Real pipelines, can improve policy robustness, but either struggle under out-of-distribution conditions or require costly offline retraining. In this work, we approach these problems from a different perspective. Instead of relying on diverse training conditions before deployment, we focus on rapidly adapting the learned policy in the real world in an online fashion. To achieve this, we propose a novel online adaptive learning framework that unifies residual dynamics learning with real-time policy adaptation inside a differentiable simulation. Starting from a simple dynamics model, our framework refines the model continuously with real-world data to capture unmodeled effects and disturbances such as payload changes and wind. The refined dynamics model is embedded in a differentiable simulation framework, enabling gradient backpropagation through the dynamics and thus rapid, sample-efficient policy updates beyond the reach of classical RL methods like PPO. All components of our system are designed for rapid adaptation, enabling the policy to adjust to unseen disturbances within 5 seconds of training. We validate the approach on agile quadrotor control under various disturbances in both simulation and the real world. Our framework reduces hovering error by up to 81% compared to L1-MPC and 55% compared to DATT, while also demonstrating robustness in vision-based control without explicit state estimation.
Problem

Research questions and friction points this paper is trying to address.

Bridging the sim-to-real gap for robotic policy transfer
Addressing unmodeled dynamics and environmental disturbances
Enabling rapid online adaptation to out-of-distribution conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online adaptive learning with differentiable simulation
Real-time residual dynamics learning from data
Gradient-based rapid policy updates via backpropagation