🤖 AI Summary
To address the poor initial performance and inefficient online improvement of diffusion-based policies in novel open-world settings, this paper proposes diffusion steering via reinforcement learning (DSRL): a framework that runs RL directly over the latent-noise space of a pretrained diffusion policy, requiring only black-box access to that policy -- no gradient access, no weight updates, and no additional demonstration data. DSRL combines the generalization of behavior cloning with the adaptability of RL, reportedly reducing the interaction samples needed by over 90% compared to standard RL on both simulated and real-robot tasks, and it can likewise steer pretrained generalist diffusion policies across tasks without modifying their weights. The core contribution is recasting policy improvement as RL over the latent-noise input of a frozen diffusion policy, which sidesteps the difficulties of finetuning diffusion models and enables highly sample-efficient autonomous improvement.
📝 Abstract
Robotic control policies learned from human demonstrations have achieved impressive results in many real-world applications. However, in scenarios where initial performance is not satisfactory, as is often the case in novel open-world settings, such behavioral cloning (BC)-learned policies typically require collecting additional human demonstrations to further improve their behavior -- an expensive and time-consuming process. In contrast, reinforcement learning (RL) holds the promise of enabling autonomous online policy improvement, but often falls short of achieving this due to the large number of samples it typically requires. In this work we take steps towards enabling fast autonomous adaptation of BC-trained policies via efficient real-world RL. Focusing in particular on diffusion policies -- a state-of-the-art BC methodology -- we propose diffusion steering via reinforcement learning (DSRL): adapting the BC policy by running RL over its latent-noise space. We show that DSRL is highly sample efficient, requires only black-box access to the BC policy, and enables effective real-world autonomous policy improvement. Furthermore, DSRL avoids many of the challenges associated with finetuning diffusion policies, obviating the need to modify the weights of the base policy at all. We demonstrate DSRL on simulated benchmarks, real-world robotic tasks, and for adapting pretrained generalist policies, illustrating its sample efficiency and effective performance at real-world policy improvement.
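The key idea above -- treating the diffusion policy's latent noise, rather than the robot action, as the thing the RL agent chooses -- can be sketched as an environment wrapper. The sketch below is illustrative, not the paper's actual API: the class names, the toy environment, and the stand-in "diffusion policy" are all hypothetical, and the RL loop is reduced to a random search over the noise space just to show that steering the latent alone can improve reward while the base policy stays frozen.

```python
import numpy as np

class NoiseSpaceEnv:
    """Wraps a base environment so the RL action space becomes the latent
    noise fed to a frozen, black-box diffusion policy (hypothetical names)."""
    def __init__(self, base_env, diffusion_policy):
        self.base_env = base_env
        self.policy = diffusion_policy  # only called, never differentiated
        self.obs = None

    def reset(self):
        self.obs = self.base_env.reset()
        return self.obs

    def step(self, latent_noise):
        # The frozen policy maps (observation, latent) -> environment action.
        action = self.policy(self.obs, latent_noise)
        self.obs, reward, done = self.base_env.step(action)
        return self.obs, reward, done

# --- toy stand-ins so the sketch runs (not from the paper) ---
class ToyEnv:
    """One-step bandit: reward is highest when the action hits 0.7."""
    def reset(self):
        return 0.0
    def step(self, action):
        return 0.0, -abs(action - 0.7), True

def toy_diffusion_policy(obs, latent):
    # Stand-in for the denoiser: a fixed deterministic map latent -> action.
    return np.tanh(latent)

env = NoiseSpaceEnv(ToyEnv(), toy_diffusion_policy)

# Minimal "RL": random search over the 1-D noise space.
rng = np.random.default_rng(0)
best_noise, best_reward = 0.0, -np.inf
for _ in range(200):
    z = rng.normal()
    env.reset()
    _, r, _ = env.step(z)
    if r > best_reward:
        best_noise, best_reward = z, r

print(f"best reward after noise search: {best_reward:.3f}")
```

A real instantiation would replace the random search with an off-policy RL algorithm acting in the noise space, but the interface is the same: the learner never touches the diffusion policy's weights.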