🤖 AI Summary
Existing vision-based imitation learning methods use foundation models mainly for high-level planning and fall back on predefined motion primitives for physical interaction, which limits generalization and precision. This paper introduces FMimic, a paradigm that directly leverages foundation models, specifically vision-language models (VLMs), to learn generalizable skills at the fine-grained action level, requiring only a small number of human demonstration videos and eliminating dependence on handcrafted motion primitives. Its core idea is to transfer the visual and linguistic reasoning of VLMs to low-level action modeling, supporting both long-horizon planning and high-precision manipulation. FMimic delivers strong performance from a single human video and outperforms all compared methods with five videos: it improves success rates by over 39% in RLBench multi-task experiments and over 29% in real-world manipulation tasks, and exceeds baselines by more than 34% on high-precision tasks and 47% on long-horizon tasks.
📝 Abstract
Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in foundation models, particularly Vision Language Models (VLMs), have demonstrated remarkable capabilities in visual and linguistic reasoning for VIL tasks. Despite this progress, existing approaches primarily utilize these models to learn high-level plans from human demonstrations while relying on pre-defined motion primitives to execute physical interactions, which remains a major bottleneck for robotic systems. In this work, we present FMimic, a novel paradigm that harnesses foundation models to directly learn generalizable skills, even at the fine-grained action level, using only a limited number of human videos. Extensive experiments demonstrate that FMimic delivers strong performance with a single human video and significantly outperforms all other methods with five videos. Furthermore, our method exhibits significant improvements of over 39% and 29% in RLBench multi-task experiments and real-world manipulation tasks, respectively, and exceeds baselines by more than 34% in high-precision tasks and 47% in long-horizon tasks.
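To make the contrast in the abstract concrete, below is a minimal, hypothetical Python sketch of the difference between primitive-based execution and direct fine-grained action prediction by a foundation model. Every name in it (`Waypoint`, `PRIMITIVES`, `query_vlm`) is an illustrative placeholder of our own, not FMimic's actual interface or any specific VLM API.

```python
"""Illustrative sketch only (not the paper's implementation): contrasting
primitive-based execution with direct fine-grained action prediction."""

from dataclasses import dataclass
from typing import List


@dataclass
class Waypoint:
    """A fine-grained end-effector target: position in meters plus gripper state."""
    xyz: tuple
    gripper_open: bool


# --- Conventional VIL pipeline: the model only selects from handcrafted
# primitives, so the low-level motion itself is fixed in advance. ---
PRIMITIVES = {
    "pick": lambda obj: f"execute scripted pick({obj})",
    "place": lambda obj: f"execute scripted place({obj})",
}


def primitive_based_policy(plan_step: str, target: str) -> str:
    # The model's role ends at choosing a primitive; motion is pre-defined.
    return PRIMITIVES[plan_step](target)


# --- FMimic-style idea as described in the abstract: the foundation model
# itself produces fine-grained, low-level actions from a handful of human
# demonstration videos, with no primitive library in between. ---
def query_vlm(demo_frames: List[str], instruction: str) -> List[Waypoint]:
    """Hypothetical VLM call returning dense waypoints; stubbed for this sketch."""
    # A real system would send demonstration frames and the instruction to a
    # vision-language model and parse its output into continuous actions.
    return [
        Waypoint(xyz=(0.40, 0.02, 0.15), gripper_open=True),
        Waypoint(xyz=(0.40, 0.02, 0.05), gripper_open=False),
    ]


def fine_grained_policy(demo_frames: List[str], instruction: str) -> List[Waypoint]:
    # The low-level trajectory comes directly from the foundation model.
    return query_vlm(demo_frames, instruction)


if __name__ == "__main__":
    print(primitive_based_policy("pick", "red block"))
    for wp in fine_grained_policy(["frame_000.png", "frame_001.png"],
                                  "pick up the red block"):
        print(wp)
```

In this toy contrast, only the second policy lets the model determine the continuous motion itself, which is the level of control the paper targets; how FMimic actually parameterizes and refines those actions is specified in the paper, not here.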