🤖 AI Summary
This work addresses the challenge of safe policy initialization and efficient optimization for image-driven vacuum gripper manipulation, given only 50 suboptimal human demonstrations—where behavioral cloning underperforms and conventional online reinforcement learning (RL) risks mechanical damage due to unsafe random exploration. We propose the first offline-to-online RL framework integrating a Neural Tangent Kernel (NTK)-inspired regularization mechanism—replacing target networks—to ensure safe policy initialization and stable fine-tuning. Our “offline pretraining + online fine-tuning” architecture achieves >90% grasp success on a real robot with just two hours of closed-loop interaction, significantly outperforming behavioral cloning and leading RL baselines. Key contributions include: (i) theoretical grounding of NTK regularization for sample-efficient adaptation from minimal demonstrations; (ii) a physics-aware offline-online co-design paradigm prioritizing hardware safety; and (iii) empirical validation of efficient, robust deployment on real robotic hardware.
📝 Abstract
Offline-to-online reinforcement learning (O2O RL) aims to obtain a continually improving policy as it interacts with the environment, while ensuring the initial policy behaviour is satisficing. This satisficing behaviour is necessary for robotic manipulation where random exploration can be costly due to catastrophic failures and time. O2O RL is especially compelling when we can only obtain a scarce amount of (potentially suboptimal) demonstrations$unicode{x2014}$a scenario where behavioural cloning (BC) is known to suffer from distribution shift. Previous works have outlined the challenges in applying O2O RL algorithms under the image-based environments. In this work, we propose a novel O2O RL algorithm that can learn in a real-life image-based robotic vacuum grasping task with a small number of demonstrations where BC fails majority of the time. The proposed algorithm replaces the target network in off-policy actor-critic algorithms with a regularization technique inspired by neural tangent kernel. We demonstrate that the proposed algorithm can reach above 90% success rate in under two hours of interaction time, with only 50 human demonstrations, while BC and existing commonly-used RL algorithms fail to achieve similar performance.