🤖 AI Summary
This work addresses the significant performance degradation of reinforcement learning policies in contact-rich tasks, such as pushing and pick-and-place, when test environments diverge from the training distribution. To mitigate this issue, the authors propose a hybrid control architecture that integrates Deep Deterministic Policy Gradient (DDPG) with Bounded Extremum Seeking (Bounded ES). In this framework, DDPG provides an efficient initial policy, while Bounded ES enables online, adaptive compensation for out-of-distribution perturbations during deployment, including variations in target positions and friction coefficients. Experimental results demonstrate that the proposed approach substantially outperforms pure reinforcement learning baselines across multiple scenarios involving dynamic distribution shifts, achieving higher robustness and task success rates.
📝 Abstract
Reinforcement learning has shown strong performance in robotic manipulation, but learned policies often degrade when test conditions differ from the training distribution. This limitation is especially important in contact-rich tasks such as pushing and pick-and-place, where changes in goals, contact conditions, or robot dynamics can drive the system out-of-distribution at inference time. In this paper, we investigate a hybrid controller that combines reinforcement learning with bounded extremum seeking (ES) to improve robustness under such conditions. In the proposed approach, deep deterministic policy gradient (DDPG) policies are trained under standard conditions on robotic pushing and pick-and-place tasks, and are then combined with bounded ES during deployment. The RL policy provides fast manipulation behavior, while bounded ES keeps the overall controller robust to time variations when operating conditions depart from those seen during training. The resulting controller is evaluated under several out-of-distribution settings, including time-varying goals and spatially varying friction patches.
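To make the bounded ES ingredient concrete, the sketch below implements the standard bounded extremum-seeking update (in the style of Scheinker and Krstić), θ̇ = √(αω) cos(ωt + κJ(θ)), whose per-step change is bounded by √(αω)·dt regardless of how large the measured cost J is. This is a minimal scalar illustration, not the paper's controller: the quadratic cost, the optimum at 0.7 (standing in for an unseen goal shift), and all gains are hypothetical.

```python
import math

def bounded_es(cost, theta0, steps=40000, dt=0.0005,
               alpha=1.0, omega=100.0, kappa=2.0):
    """Bounded extremum-seeking on a scalar parameter theta.

    Discretizes theta' = sqrt(alpha*omega) * cos(omega*t + kappa*J(theta)).
    On average this descends J, while the update magnitude stays bounded
    by dt*sqrt(alpha*omega) no matter how large J becomes.
    """
    theta = theta0
    gain = math.sqrt(alpha * omega)
    for k in range(steps):
        t = k * dt
        theta += dt * gain * math.cos(omega * t + kappa * cost(theta))
    return theta

# Hypothetical stand-in cost: squared error of an additive action
# correction whose unknown optimum (e.g. a goal shift not seen during
# training) sits at 0.7.
cost = lambda th: (th - 0.7) ** 2
theta_hat = bounded_es(cost, theta0=0.0)
```

In a hybrid scheme of the kind the abstract describes, a correction like `theta_hat` would be adapted online from a measured task cost and added to the frozen RL policy's action, so the bounded dither compensates for drift the policy never saw in training.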