🤖 AI Summary
To address planning and control failures in robotic manipulation caused by execution noise, incomplete perception, and model mismatch, this paper proposes RecoveryChaining, a hierarchical reinforcement learning framework. Methodologically, it introduces (1) a hybrid action space that combines locally learned recovery actions with multiple pre-trained model-based controllers exposed as selectable "nominal" options; (2) autonomous decision-making under sparse rewards about when to recover, how to recover, and which nominal controller to hand control back to, jointly optimizing fault recovery and task continuity; and (3) compatibility with existing model-based planners and controllers, plus support for sim-to-real transfer. Evaluated on three multi-step, sparse-reward manipulation tasks, RecoveryChaining learns significantly more robust recovery policies than baseline methods and transfers successfully to a physical robot arm, validating its effectiveness, robustness, and practical applicability.
📝 Abstract
Model-based planners and controllers are commonly used to solve complex manipulation problems, as they can efficiently optimize diverse objectives and generalize to long-horizon tasks. However, they often fail during deployment due to noisy actuation, partial observability, and imperfect models. To enable a robot to recover from such failures, we propose to use hierarchical reinforcement learning to learn a recovery policy. The recovery policy is triggered when a failure is detected from sensory observations and seeks to take the robot to a state from which it can complete the task using the nominal model-based controllers. Our approach, called RecoveryChaining, uses a hybrid action space in which the model-based controllers are provided as additional *nominal* options, which allows the recovery policy to decide how to recover, when to switch to a nominal controller, and which controller to switch to, even with *sparse rewards*. We evaluate our approach on three multi-step manipulation tasks with sparse rewards, where it learns significantly more robust recovery policies than those learned by baselines. We successfully transfer recovery policies learned in simulation to a physical robot, demonstrating the feasibility of sim-to-real transfer with our method.
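To make the hybrid action space concrete, here is a minimal illustrative sketch (all names and dynamics are hypothetical, not the authors' implementation): the recovery policy's discrete actions are either local recovery motions or nominal controller options, and selecting a nominal option terminates the recovery episode by handing control back to that model-based controller.

```python
# Illustrative sketch of a hybrid action space for recovery learning.
# Controller names, recovery motions, and 1-D "dynamics" are made up for
# demonstration; the paper's actual controllers and states are far richer.

NOMINAL_CONTROLLERS = ["reach", "grasp", "place"]          # pre-trained options
RECOVERY_MOTIONS = ["retreat", "nudge_left", "nudge_right"]  # local recovery actions

# Hybrid action space: indices 0..2 pick a recovery motion,
# indices 3..5 pick a nominal controller to switch to.
ACTIONS = RECOVERY_MOTIONS + NOMINAL_CONTROLLERS

def step(state, action_idx):
    """Execute one hybrid action; return (next_state, handed_off).

    Choosing a nominal option ends the recovery episode: the selected
    model-based controller resumes the task from the current state.
    """
    action = ACTIONS[action_idx]
    if action in NOMINAL_CONTROLLERS:
        return state, True  # hand off to the nominal controller
    # Otherwise apply a local recovery motion (stub 1-D dynamics).
    delta = {"retreat": -1.0, "nudge_left": -0.1, "nudge_right": 0.1}[action]
    return state + delta, False

# Example: action 3 ("reach") hands off; action 0 ("retreat") keeps recovering.
print(step(0.5, 3))  # → (0.5, True)
print(step(1.0, 0))  # → (0.0, False)
```

Because switching to a nominal controller is itself an action, the policy can learn from a sparse task reward both *where* to recover to and *which* controller to resume, rather than needing a hand-designed handoff rule.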