🤖 AI Summary
Behavior cloning policies often inherit the low-speed pacing of human demonstrations, hindering real-world deployment. Existing acceleration methods lack task-level semantic understanding and generalize poorly. This paper proposes a semantic-aware demonstration downsampling framework: (1) it introduces the first integration of a vision-language model (VLM)–large language model (LLM) pipeline with 3D gripper–object relational modeling to enable fine-grained, semantically grounded task segmentation; (2) it applies aggressive downsampling exclusively during non-critical phases, requiring no policy retraining or architectural modification; and (3) it leverages dynamic time warping (DTW) to align semantic labels with dynamics features, ensuring cross-dataset generalizability. Evaluated both in simulation and on real robotic platforms, the method achieves approximately 2× execution speedup while preserving the original task success rate, effectively bridging the gap between human demonstrations and efficient robot control.
📝 Abstract
Behavior-cloning based visuomotor policies enable precise manipulation but often inherit the slow, cautious tempo of human demonstrations, limiting practical deployment. Prior acceleration methods, however, mainly rely on statistical or heuristic cues that ignore task semantics and can fail across diverse manipulation settings. We present ESPADA, a semantic and spatially aware framework that segments demonstrations using a VLM-LLM pipeline with 3D gripper-object relations, enabling aggressive downsampling only in non-critical segments while preserving precision-critical phases, without requiring extra data, architectural modifications, or any form of retraining. To scale from a single annotated episode to the full dataset, ESPADA propagates segment labels via Dynamic Time Warping (DTW) on dynamics-only features. Across both simulation and real-world experiments with ACT and DP baselines, ESPADA achieves approximately a 2× speed-up while maintaining success rates, narrowing the gap between human demonstrations and efficient robot control.
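The DTW-based label propagation step can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D "dynamics" feature (e.g., gripper speed), the segment label names, and the helper functions below are all illustrative assumptions. The idea is to align a labeled episode with an unlabeled one on dynamics features alone, then copy each source timestep's segment label to its warped target timestep:

```python
# Illustrative sketch (not ESPADA's actual code): propagate per-timestep
# segment labels from one annotated episode to another via classic DTW
# on a 1-D dynamics feature such as gripper speed.

def dtw_path(a, b):
    """Classic DTW on two 1-D sequences; returns the optimal warping path
    as a list of (i, j) index pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j],     i - 1, j),
                      (cost[i][j - 1],     i,     j - 1))
    return path[::-1]

def propagate_labels(src_feats, src_labels, tgt_feats):
    """Assign each target timestep the label of its DTW-aligned source step."""
    labels = [None] * len(tgt_feats)
    for i, j in dtw_path(src_feats, tgt_feats):
        labels[j] = src_labels[i]  # later path entries overwrite earlier ones
    return labels

# Toy example: a slow precision phase (small speeds) between transport phases.
src = [0.9, 0.8, 0.1, 0.1, 0.9]
src_lbl = ["move", "move", "grasp", "grasp", "move"]
tgt = [0.85, 0.15, 0.12, 0.88]
print(propagate_labels(src, src_lbl, tgt))
# → ['move', 'grasp', 'grasp', 'move']
```

Because the alignment uses only dynamics features (no images or language), a single VLM-LLM-annotated episode can label the remaining demonstrations cheaply, which is what makes the approach dataset-scalable.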