🤖 AI Summary
Standard behavioral cloning (BC) pretraining often impedes subsequent reinforcement learning (RL) fine-tuning: the pretrained policy may fail to cover the demonstrator's actions, leading to poor convergence. Method: We first theoretically characterize this failure, showing that BC carries no inherent guarantee of action-space coverage. We then propose Posterior Behavioral Cloning (PostBC), which leverages conditional generative models (such as diffusion models or variational autoencoders) to model the posterior distribution of the demonstrator's behavior given the demonstration dataset. The resulting policy guarantees coverage over the demonstrator's actions, providing the exploration needed for effective RL fine-tuning, while preserving pretraining performance. Crucially, it requires only supervised learning, with no additional RL or adversarial training. Contribution/Results: Across simulated and real-robot manipulation tasks, PostBC improves RL fine-tuning sample efficiency by up to 2.3× over standard BC, while achieving superior final policy performance and deployment stability.
📝 Abstract
Standard practice across domains from robotics to language is to first pretrain a policy on a large-scale demonstration dataset, and then finetune this policy, typically with reinforcement learning (RL), to improve performance on deployment domains. This finetuning step has proved critical to achieving human or super-human performance, yet while much attention has been given to developing more effective finetuning algorithms, comparatively little has been given to ensuring the pretrained policy is an effective initialization for RL finetuning. In this work, we seek to understand how the pretrained policy affects finetuning performance, and how to pretrain policies so that they are effective initializations for finetuning. We first show theoretically that standard behavioral cloning (BC), which trains a policy to directly match the actions played by the demonstrator, can fail to ensure coverage over the demonstrator's actions, a minimal condition necessary for effective RL finetuning. We then show that if, instead of exactly fitting the observed demonstrations, we train a policy to model the posterior distribution of the demonstrator's behavior given the demonstration dataset, we do obtain a policy that ensures coverage over the demonstrator's actions, enabling more effective finetuning. Furthermore, this policy, which we refer to as the posterior behavioral cloning (PostBC) policy, achieves this while ensuring pretrained performance is no worse than that of the BC policy. Finally, we show that PostBC is practically implementable with modern generative models in robotic control domains, relying only on standard supervised learning, and leads to significantly improved RL finetuning performance on both realistic robotic control benchmarks and real-world robotic manipulation tasks, as compared to standard behavioral cloning.
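The coverage argument above can be illustrated with a toy one-dimensional example. This is a hypothetical sketch, not the paper's implementation: we stand in for the conditional generative model with a conjugate Gaussian posterior at a single state. Standard BC (MSE regression) collapses to a point estimate of the demonstrator's action, whereas sampling from the posterior predictive retains spread, and hence coverage, over the demonstrator's actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: demonstrator actions observed at one state (1-D).
demo_actions = rng.normal(loc=1.5, scale=0.5, size=20)

# Standard BC with an MSE loss: the policy collapses to the sample mean,
# a single point with no spread over the demonstrator's actions.
bc_action = demo_actions.mean()

# Posterior sketch: Gaussian likelihood with known variance sigma2 and a
# conjugate N(0, tau2) prior on the demonstrator's mean action. The
# posterior predictive is again Gaussian, with extra variance reflecting
# uncertainty about the demonstrator, so sampled actions retain coverage.
sigma2, tau2, n = 0.25, 1.0, len(demo_actions)
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (demo_actions.sum() / sigma2)
pred_std = np.sqrt(post_var + sigma2)  # posterior predictive std

# The posterior policy places probability mass around every demonstrated
# action; the BC policy is a deterministic point.
post_samples = rng.normal(post_mean, pred_std, size=1000)
```

In the full method this role is played by a conditional generative model (e.g., a diffusion model) trained with standard supervised learning on the demonstration dataset; the toy posterior here only conveys why modeling the posterior, rather than regressing to the data, preserves action coverage for RL finetuning.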