🤖 AI Summary
This work addresses the challenge of learning robot manipulation policies without robot-side demonstrations or online exploration. We propose ZeroMimic, a video distillation framework that enables zero-shot learning of image goal-conditioned robotic manipulation policies directly from unlabeled in-the-wild human demonstration videos (e.g., EpicKitchens). Our method integrates semantic and geometric visual understanding, grasp affordance estimation, and an image goal-conditioned imitation architecture to achieve end-to-end skill distillation into deployable policies. To our knowledge, this is the first approach enabling plug-and-play deployment across objects, scenes, and heterogeneous robot platforms, supporting six common kitchen tasks: opening, closing, pouring, pick-and-place, cutting, and stirring. We validate zero-shot success in both real-world kitchen environments and simulation, across two distinct robot embodiments. All trained policy checkpoints and the full toolchain are publicly released.
📝 Abstract
Many recent advances in robotic manipulation have come through imitation learning, yet it relies largely on a particularly hard-to-acquire form of demonstrations: those collected on the same robot, in the same room, with the same objects that the trained policy must handle at test time. In contrast, large pre-recorded datasets of human videos demonstrating manipulation skills in the wild already exist and contain valuable information for robots. Is it possible to distill a repository of useful robotic skill policies from such data, without any additional robot-specific demonstrations or exploration? We present ZeroMimic, the first such system: it generates immediately deployable, image goal-conditioned skill policies for several common categories of manipulation tasks (opening, closing, pouring, pick&place, cutting, and stirring), each capable of acting upon diverse objects and across diverse unseen task setups. ZeroMimic is carefully designed to exploit recent advances in semantic and geometric visual understanding of human videos, together with modern grasp affordance detectors and imitation policy classes. After training ZeroMimic on the popular EpicKitchens dataset of egocentric human videos, we evaluate its out-of-the-box performance in varied real-world and simulated kitchen settings with two different robot embodiments, demonstrating its impressive ability to handle these varied tasks. To enable plug-and-play reuse of ZeroMimic policies on other task setups and robots, we release software and checkpoints for our skill policies.