🤖 AI Summary
Imitation learning for robotic manipulation suffers from insufficient diversity in spatial configurations: existing demonstration trajectories are predominantly collected in static settings—featuring fixed object poses, target locations, and camera viewpoints—leading to poor spatial generalization. To address this, we propose MOtion-Based Variability Enhancement (MOVE), a novel data augmentation strategy that introduces controlled, active motion of movable objects within a single demonstration trajectory. This induces dense, continuous variations in spatial configurations implicitly, thereby breaking the static data collection paradigm. MOVE is integrated into imitation learning frameworks both in simulation and on real robots, enabling joint training for dynamic data augmentation and spatial generalization. Experiments demonstrate that MOVE raises the average task success rate from 22.2% to 39.1% in simulation (a 76.1% relative improvement), enhances data efficiency by 2–5× on selected tasks, and significantly boosts generalization to unseen spatial configurations.
📝 Abstract
Imitation learning has shown immense promise for robotic manipulation, yet its practical deployment is fundamentally constrained by data scarcity. Despite prior work on collecting large-scale datasets, a significant gap to robust spatial generalization remains. We identify a key limitation: individual trajectories, regardless of their length, are typically collected from a *single, static spatial configuration* of the environment. This includes fixed object and target positions as well as unchanging camera viewpoints, which significantly restricts the diversity of spatial information available for learning. To address this critical bottleneck in data efficiency, we propose **MOtion-Based Variability Enhancement** (*MOVE*), a simple yet effective data collection paradigm that enables the acquisition of richer spatial information from dynamic demonstrations. Our core contribution is an augmentation strategy that injects motion into any movable objects within the environment for each demonstration. This process implicitly generates a dense and diverse set of spatial configurations within a single trajectory. We conduct extensive experiments in both simulation and real-world environments to validate our approach. For example, in simulation tasks requiring strong spatial generalization, *MOVE* achieves an average success rate of 39.1%, a 76.1% relative improvement over the static data collection paradigm (22.2%), and yields up to 2–5× gains in data efficiency on certain tasks. Our code is available at https://github.com/lucywang720/MOVE.
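To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation; all names are illustrative) of how injecting smooth motion into a movable object during one demonstration yields many distinct spatial configurations, instead of the single fixed pose a static collection protocol would record:

```python
import math

def move_object_poses(base_pose, num_steps, amplitude=0.05):
    """Toy stand-in for MOVE-style augmentation: instead of keeping an
    object at one fixed (x, y) pose for the whole demonstration, sweep
    it smoothly along a small circle around its base pose, so every
    timestep of the single trajectory sees a different configuration."""
    poses = []
    for t in range(num_steps):
        phase = 2 * math.pi * t / num_steps
        x = base_pose[0] + amplitude * math.sin(phase)
        y = base_pose[1] + amplitude * math.cos(phase)
        poses.append((x, y))
    return poses

# One "trajectory" now densely covers many spatial configurations,
# whereas a static demo would contain exactly one object pose.
poses = move_object_poses(base_pose=(0.4, 0.0), num_steps=100)
unique_configs = {(round(x, 4), round(y, 4)) for x, y in poses}
print(len(unique_configs))
```

The specific motion pattern (a circle here) is an arbitrary choice for illustration; the point is only that continuous object motion turns one demonstration into a dense sampler of object poses.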