Safe Continual Domain Adaptation after Sim2Real Transfer of Reinforcement Learning Policies in Robotics

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of policy degradation in real-world robotic deployment due to dynamic distribution shifts after sim-to-real transfer. To this end, we propose a safe and continual online domain adaptation framework. Methodologically, we introduce the first integration of safety-critical reinforcement learning—enforced via Lyapunov stability constraints and Control Lyapunov Functions (CLFs)—with continual learning, specifically Elastic Weight Consolidation (EWC). Building upon domain-randomized simulation pretraining, our framework enables safe fine-tuning on real robots: it avoids unsafe exploration while mitigating catastrophic forgetting. An adaptive domain alignment mechanism further supports online policy updates. Experiments demonstrate a 42% improvement in task success rate under domain shift without any hazardous actions, while preserving pretraining generalization performance. This work establishes a verifiable and deployable paradigm for safe, continual robotic adaptation.
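The summary names Elastic Weight Consolidation (EWC) as the continual-learning component. Below is a minimal sketch of how an EWC penalty is commonly added to the fine-tuning loss so that real-world updates do not overwrite the policy pretrained in domain-randomized simulation; the function and variable names (ewc_penalty, fisher_diag, ref_params) are illustrative assumptions, not the authors' implementation.

```python
import torch

def ewc_penalty(policy, ref_params, fisher_diag, lam=1000.0):
    """Quadratic EWC penalty: penalizes drift from the pretrained (simulation)
    parameters, weighted by a diagonal Fisher estimate of how important each
    parameter was for the pretraining task."""
    penalty = torch.zeros(())
    for name, p in policy.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - ref_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During real-world fine-tuning, the total objective would combine the RL loss
# on real data with the EWC term that protects the behaviour learned in
# simulation, e.g.:
#   loss = rl_loss(real_batch) + ewc_penalty(policy, ref_params, fisher_diag)
```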

📝 Abstract
Domain randomization has emerged as a fundamental technique in reinforcement learning (RL) to facilitate the transfer of policies from simulation to real-world robotic applications. Many existing domain randomization approaches have been proposed to improve robustness and sim2real transfer. These approaches rely on wide randomization ranges to compensate for the unknown actual system parameters, leading to robust but inefficient real-world policies. In addition, the policies pretrained in the domain-randomized simulation are fixed after deployment due to the inherent instability of the optimization processes based on RL and the necessity of sampling exploitative but potentially unsafe actions on the real system. This limits the adaptability of the deployed policy to the inevitably changing system parameters or environment dynamics over time. We leverage safe RL and continual learning under domain-randomized simulation to address these limitations and enable safe deployment-time policy adaptation in real-world robot control. The experiments show that our method enables the policy to adapt and fit to the current domain distribution and environment dynamics of the real system while minimizing safety risks and avoiding issues like catastrophic forgetting of the general policy found in randomized simulation during the pretraining phase. Videos and supplementary material are available at https://safe-cda.github.io/.
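The abstract contrasts wide randomization ranges (robust but inefficient) with adapting the policy to the real system. A minimal sketch of per-episode dynamics randomization during pretraining is given below; the parameter names and ranges are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative dynamics parameters with deliberately wide pretraining ranges.
RANDOMIZATION_RANGES = {
    "link_mass_scale": (0.7, 1.3),
    "joint_friction": (0.0, 0.2),
    "actuation_delay_s": (0.0, 0.05),
}

def sample_domain(ranges=RANDOMIZATION_RANGES):
    """Draw one set of dynamics parameters for a simulated episode.
    Wide ranges force a conservative, broadly robust policy; deployment-time
    adaptation then fits the policy to the real system's actual dynamics."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

# Example: each simulated episode is run with freshly sampled dynamics.
episode_params = sample_domain()
```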
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficient real-world policies caused by wide domain randomization ranges.
Enables safe deployment-time policy adaptation in real-world robotics.
Prevents catastrophic forgetting during continual learning in changing environments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Safety-critical RL with Control Lyapunov Function (CLF) constraints enables safe policy adaptation on the real robot (see the sketch after this list).
Continual learning via Elastic Weight Consolidation (EWC) prevents catastrophic forgetting of the pretrained policy.
Domain randomization in pretraining enhances sim2real transfer robustness.
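The summary and these items attribute safety to Control Lyapunov Function constraints. Below is a minimal sketch of a CLF-style action filter, assuming a known Lyapunov candidate and a one-step dynamics predictor; the quadratic V, the decrease rate alpha, and the fallback action are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def clf_value(state):
    """Illustrative quadratic Lyapunov candidate V(x) = x^T x (an assumption)."""
    return float(np.dot(state, state))

def safe_action(state, candidate, predict_next, fallback, alpha=0.1):
    """CLF-style safety filter: accept the policy's candidate action only if
    the predicted next state satisfies the decrease condition
        V(x') - V(x) <= -alpha * V(x),
    otherwise fall back to a known-safe action (e.g. braking / holding)."""
    v_now = clf_value(state)
    v_next = clf_value(predict_next(state, candidate))
    if v_next - v_now <= -alpha * v_now:
        return candidate
    return fallback(state)
```

In a discrete-time deployment loop, such a filter sits between the RL policy and the robot, so exploratory actions that would violate the stability condition never reach the hardware.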
Josip Josifovski
Research Assistant, Technical University of Munich
Artificial Intelligence · Continual Learning · Reinforcement Learning · Robotics · Sim2Real
Shangding Gu
UC Berkeley
Artificial Intelligence · Safe Reinforcement Learning · Optimization · Planning · Robotics
M. Malmir
Technical University of Munich, Germany
Haoliang Huang
Technical University of Munich, Germany
S. Auddy
Technische Universität Berlin, Germany
Nicolás Navarro-Guerrero
L3S Research Center, Leibniz Universität Hannover, Germany
C. Spanos
University of California, Berkeley, USA
Alois Knoll
Technische Universität München
Robotics · AI · Sensor Data Fusion · Autonomous Driving · Cyber Physical Systems