🤖 AI Summary
Robotic manipulation in real-world settings, such as homes and factories, demands simultaneous guarantees of reliability, efficiency, and generalization. To address this, we propose a reinforcement learning framework that integrates diffusion models with a three-stage learning pipeline. First, a multimodal diffusion-based visuomotor policy is pre-trained on diverse sensorimotor data. Second, an offline policy evaluation gating mechanism filters high-quality trajectories to initialize iterative offline PPO optimization. Third, online fine-tuning is combined with lightweight consistency-aware knowledge distillation to enable high-frequency (single-step) control while yielding conservative, robust policies. The framework supports cross-platform deployment and multi-task transfer. Evaluated on seven real-robot tasks, it achieves perfect success (900/900), sustains uninterrupted operation for up to two hours, and matches or exceeds human teleoperation in operational efficiency.
📝 Abstract
Real-world robotic manipulation in homes and factories demands reliability, efficiency, and robustness that approach or surpass skilled human operators. We present RL-100, a real-world reinforcement learning training framework built on diffusion visuomotor policies trained by supervised learning. RL-100 introduces a three-stage pipeline. First, imitation learning leverages human priors. Second, iterative offline reinforcement learning uses an Offline Policy Evaluation (OPE) procedure to gate PPO-style updates applied in the denoising process, yielding conservative and reliable improvement. Third, online reinforcement learning eliminates residual failure modes. An additional lightweight consistency distillation head compresses the multi-step diffusion sampling process into a single-step policy, enabling high-frequency control with an order-of-magnitude reduction in latency while preserving task performance. The framework is task-, embodiment-, and representation-agnostic: it supports both 3D point-cloud and 2D RGB inputs, a variety of robot platforms, and both single-step and action-chunk policies. We evaluate RL-100 on seven real-robot tasks spanning dynamic rigid-body control (such as Push-T and Agile Bowling), fluid and granular pouring, deformable cloth folding, precise dexterous unscrewing, and multi-stage orange juicing. RL-100 attains 100% success across evaluated trials, 900 out of 900 episodes in total, including up to 250 out of 250 consecutive trials on a single task. It matches or exceeds human teleoperation in time efficiency and demonstrates multi-hour robustness, with uninterrupted operation lasting up to two hours.
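The OPE-gated offline stage can be sketched as a simple accept/reject loop: each candidate policy update is kept only if its offline value estimate beats the currently deployed policy. The sketch below is a toy illustration under assumptions not specified in the abstract; `ope_estimate`, the scalar `quality` parameterization, and the Gaussian perturbation standing in for a PPO-style update are all hypothetical simplifications of RL-100's actual procedure.

```python
import random

def ope_estimate(quality: float, returns: list[float]) -> float:
    """Toy stand-in for Offline Policy Evaluation: score a candidate
    policy (here a single scalar `quality`) against logged returns."""
    return quality * sum(returns) / len(returns)

def gated_offline_iteration(quality: float, returns: list[float],
                            num_candidates: int = 5) -> float:
    """One conservative improvement round: propose candidate updates and
    accept one only if its OPE score exceeds the current policy's score."""
    baseline = ope_estimate(quality, returns)
    best = quality
    for _ in range(num_candidates):
        # Hypothetical perturbed update standing in for a PPO-style step.
        candidate = quality + random.gauss(0.05, 0.1)
        if ope_estimate(candidate, returns) > baseline:  # the OPE gate
            best = max(best, candidate)
    return best

random.seed(0)
logged_returns = [1.0, 0.8, 1.2, 0.9]  # toy offline dataset
quality = 0.5
for _ in range(3):  # iterative offline RL rounds
    quality = gated_offline_iteration(quality, logged_returns)
print(quality)
```

Because rejected candidates leave the policy unchanged, the gated quality never decreases across rounds, which mirrors the "conservative and reliable improvement" property the abstract claims for this stage.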