ESPADA: Execution Speedup via Semantics Aware Demonstration Data Downsampling for Imitation Learning

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Behavior cloning policies often inherit the slow, cautious pacing of human demonstrations, hindering real-world deployment, and existing acceleration methods lack task-level semantic understanding and generalize poorly. This paper proposes a semantics-aware demonstration downsampling framework: (1) it introduces the first integration of a vision-language model (VLM)–large language model (LLM) pipeline with 3D gripper–object relational modeling, enabling fine-grained, semantically grounded task segmentation; (2) it applies aggressive downsampling exclusively during non-critical phases, requiring no policy retraining or architectural modification; and (3) it leverages dynamic time warping (DTW) on dynamics features to propagate semantic segment labels across episodes, ensuring cross-dataset generalizability. Evaluated both in simulation and on real robotic platforms, the method achieves approximately a 2× execution speedup while preserving the original task success rate, effectively bridging the gap between human demonstrations and efficient robot control.

📝 Abstract
Behavior-cloning-based visuomotor policies enable precise manipulation but often inherit the slow, cautious tempo of human demonstrations, limiting practical deployment. Prior acceleration methods, however, mainly rely on statistical or heuristic cues that ignore task semantics and can fail across diverse manipulation settings. We present ESPADA, a semantically and spatially aware framework that segments demonstrations using a VLM-LLM pipeline with 3D gripper-object relations, enabling aggressive downsampling only in non-critical segments while preserving precision-critical phases, without extra data, architectural modifications, or any form of retraining. To scale from a single annotated episode to the full dataset, ESPADA propagates segment labels via Dynamic Time Warping (DTW) on dynamics-only features. Across both simulation and real-world experiments with ACT and DP baselines, ESPADA achieves approximately a 2× speed-up while maintaining success rates, narrowing the gap between human demonstrations and efficient robot control.
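The core speedup mechanism, dropping frames only inside non-critical segments, can be sketched in a few lines. This is an illustrative reconstruction rather than the paper's code: `frames`, the per-frame criticality flags in `labels`, and the fixed `stride` are assumptions for the sketch.

```python
def downsample_demo(frames, labels, stride=3):
    """Keep every `stride`-th frame of non-critical segments and every
    frame of precision-critical ones, shortening the trajectory the
    policy imitates without touching the delicate phases.

    frames : list of demonstration timesteps (any payload)
    labels : list of bools, True where the segment is precision-critical
    """
    kept = []
    run = 0  # position within the current non-critical run
    for frame, critical in zip(frames, labels):
        if critical:
            kept.append(frame)  # critical phase: keep everything
            run = 0
        else:
            if run % stride == 0:
                kept.append(frame)  # thin out non-critical motion
            run += 1
    return kept
```

With `stride=3`, a non-critical run keeps one frame in three, so a demonstration dominated by transit motion replays substantially faster while grasp- or insertion-like phases are left untouched.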
Problem

Research questions and friction points this paper is trying to address.

Behavior-cloned policies inherit the slow, cautious pacing of human demonstrations, limiting real-world deployment
Existing acceleration methods rely on statistical or heuristic cues that ignore task semantics and generalize poorly
Manually annotating semantic segments for every demonstration episode does not scale to full datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-aware demonstration segmentation via VLM-LLM pipeline
Aggressive downsampling in non-critical segments for speedup
Segment label propagation using Dynamic Time Warping on dynamics features
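The label-propagation step above can likewise be sketched: align one annotated episode to an unlabeled one by DTW over dynamics features, then carry labels across the warping path. A minimal self-contained version, assuming scalar per-frame features and per-frame segment labels (all names are illustrative, not the paper's implementation):

```python
import math

def dtw_path(ref, qry, dist=lambda a, b: abs(a - b)):
    """Classic O(n*m) dynamic time warping; returns the optimal
    warping path as a list of (ref_index, qry_index) pairs."""
    n, m = len(ref), len(qry)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(ref[i - 1], qry[j - 1]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    # Backtrack from the corner, always taking the cheapest predecessor.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return path[::-1]

def propagate_labels(ref_feats, ref_labels, qry_feats):
    """Transfer per-frame segment labels from an annotated episode to an
    unlabeled one along the DTW alignment of dynamics features."""
    labels = [None] * len(qry_feats)
    for i, j in dtw_path(ref_feats, qry_feats):
        labels[j] = ref_labels[i]  # later matches overwrite earlier ones
    return labels
```

Because the warping path visits every query index, each frame of the new episode receives some label from the single annotated reference, which is what lets one annotation scale to the whole dataset.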
👥 Authors
Byungju Kim
Amazon
Machine Learning, Deep Learning
Jinu Pahk
Tommoro Robotics
Chungwoo Lee
Tommoro Robotics
Jaejoon Kim
Tommoro Robotics
Jangha Lee
Tommoro Robotics
Theo Taeyeong Kim
Department of Computer Science and Engineering, Seoul National University
Kyuhwan Shim
Interdisciplinary Program in Artificial Intelligence, Seoul National University
Jun Ki Lee
Associate Research Professor, Seoul National University AI Institute
Artificial Intelligence, Robotics, Task and Motion Planning, Teleoperation, Social Robots
Byoung-Tak Zhang
Professor of Computer Science, Cognitive Science, and Brain Science, Seoul National University
Machine Learning, Artificial Intelligence, Cognitive Science