🤖 AI Summary
This work addresses the challenge of deploying reinforcement learning in real-world settings, where catastrophic failures—such as spilling water or breaking glass—often hinder practical application. The authors propose a failure-aware offline-to-online reinforcement learning paradigm that proactively prevents, and autonomously recovers from, such failures by leveraging an offline-trained safety critic and recovery policy during online execution. Central to this approach is FailureBench, the first benchmark dedicated to intervention-requiring failures, alongside a unified framework integrating world model–based safety assessment and offline-trained recovery strategies. Across simulation and real-robot experiments, the method reduces intervention-requiring failures by 73.1% and improves average task performance by 11.3%, substantially enhancing policy generalization and reliability.
📝 Abstract
Post-training algorithms based on deep reinforcement learning can push the limits of robotic models for specific objectives, such as generalizability, accuracy, and robustness. However, Intervention-requiring Failures (IR Failures) (e.g., a robot spilling water or breaking fragile glass) inevitably occur during real-world exploration, hindering the practical deployment of such a paradigm. To tackle this, we introduce Failure-Aware Offline-to-Online Reinforcement Learning (FARL), a new paradigm that minimizes failures during real-world reinforcement learning. We create FailureBench, a benchmark that incorporates common failure scenarios requiring human intervention, and propose an algorithm that integrates a world-model-based safety critic and a recovery policy, both trained offline, to prevent failures during online exploration. Extensive simulation and real-world experiments demonstrate the effectiveness of FARL in significantly reducing IR Failures while improving performance and generalization during online reinforcement learning post-training. FARL reduces IR Failures by 73.1% while elevating performance by 11.3% on average during real-world RL post-training. Videos and code are available at https://failure-aware-rl.github.io.
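The control flow the abstract describes—querying an offline-trained safety critic before each online action and falling back to an offline-trained recovery policy when predicted risk is high—can be illustrated with a minimal sketch. This is not the paper's implementation: the risk threshold, the toy one-dimensional state, and all function names (`safety_critic`, `task_policy`, `recovery_policy`, `farl_step`) are hypothetical stand-ins for the learned components.

```python
RISK_THRESHOLD = 0.5  # hypothetical safety margin, not a value from the paper


def safety_critic(state: float, action: float) -> float:
    """Stand-in for the offline-trained, world-model-based safety critic:
    returns an estimated probability that executing `action` in `state`
    leads to an intervention-requiring failure. In this toy setup, risk
    grows as the next state approaches the 'fragile' boundary at 1.0."""
    return max(0.0, min(1.0, state + action))


def task_policy(state: float) -> float:
    """Hypothetical task policy being post-trained online: always pushes
    toward the goal."""
    return 0.3


def recovery_policy(state: float) -> float:
    """Hypothetical recovery policy trained offline: backs away from the
    risky region."""
    return -0.2


def farl_step(state: float) -> float:
    """One step of the failure-aware online loop: consult the safety
    critic before executing the task action; if the predicted risk is
    too high, substitute the recovery action instead."""
    action = task_policy(state)
    if safety_critic(state, action) > RISK_THRESHOLD:
        action = recovery_policy(state)
    return state + action


state = 0.0
trajectory = [state]
for _ in range(5):
    state = farl_step(state)
    trajectory.append(state)
```

In this sketch the critic vetoes the task action whenever it would carry the state too close to the failure boundary, so the agent keeps exploring without ever crossing it—mirroring the intended effect of reducing IR Failures during online post-training.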