Speech-to-Trajectory: Learning Human-Like Verbal Guidance for Robot Motion

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses three key limitations of natural language instruction-driven robotic control: poor generalization across linguistic variations, inconsistent behavioral execution, and overreliance on predefined command vocabularies. We propose the Directive Language Model (DLM), an end-to-end trajectory generation framework that integrates semantically enhanced imitation learning with diffusion-based policy modeling. Methodologically, DLM combines behavior cloning, GPT-driven semantic paraphrasing for data augmentation, diffusion policy training, and human-guided motion data collected in simulation. Its core innovation is the joint modeling of semantic enhancement and diffusion policies, optimizing simultaneously for generalization, deterministic execution, and embodiment-agnostic cross-platform adaptability. Experiments demonstrate that DLM significantly improves robustness to diverse spoken-language expressions, reduces dependence on structured instructions, and generates human-like, predictable, real-time executable motion trajectories.

📝 Abstract
Full integration of robots into real-life applications necessitates their ability to interpret and execute natural language directives from untrained users. Given the inherent variability in human language, equivalent directives may be phrased differently, yet require consistent robot behavior. While Large Language Models (LLMs) have advanced language understanding, they often falter in handling user phrasing variability, rely on predefined commands, and exhibit unpredictable outputs. This letter introduces the Directive Language Model (DLM), a novel speech-to-trajectory framework that directly maps verbal commands to executable motion trajectories, bypassing predefined phrases. DLM utilizes Behavior Cloning (BC) on simulated demonstrations of human-guided robot motion. To enhance generalization, GPT-based semantic augmentation generates diverse paraphrases of training commands, labeled with the same motion trajectory. DLM further incorporates a diffusion policy-based trajectory generation for adaptive motion refinement and stochastic sampling. In contrast to LLM-based methods, DLM ensures consistent, predictable motion without extensive prompt engineering, facilitating real-time robotic guidance. As DLM learns from trajectory data, it is embodiment-agnostic, enabling deployment across diverse robotic platforms. Experimental results demonstrate DLM's improved command generalization, reduced dependence on structured phrasing, and achievement of human-like motion.
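The diffusion policy-based trajectory generation described in the abstract can be illustrated with a minimal DDPM-style reverse-sampling loop. This is a simplified sketch, not the paper's implementation: the noise-prediction network, its conditioning on the spoken command, the noise schedule, and all parameter values here are assumptions, with a placeholder predictor standing in for the trained model so the loop runs standalone.

```python
import numpy as np

def sample_trajectory(predict_noise, steps=50, horizon=16, dim=2, seed=0):
    """Draw a trajectory of shape (horizon, dim) by iterative denoising.

    predict_noise(x, t) stands in for a trained network; in a diffusion
    policy it would also condition on the verbal command embedding.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)       # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((horizon, dim))      # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)                # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                                # stochastic sampling term
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# Placeholder predictor (hypothetical): treats a fraction of the current
# sample as noise, gradually contracting the trajectory toward the origin.
traj = sample_trajectory(lambda x, t: x * 0.1)
```

The stochastic term at each step is what the abstract refers to as stochastic sampling; fixing the random seed (or dropping that term) yields the deterministic, predictable execution the paper emphasizes.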
Problem

Research questions and friction points this paper is trying to address.

Mapping verbal commands to robot motion trajectories
Handling variability in human language phrasing
Ensuring consistent robot behavior without predefined commands
Innovation

Methods, ideas, or system contributions that make the work stand out.

DLM maps verbal commands to motion trajectories directly
GPT-based semantic augmentation enhances command generalization
Diffusion policy enables adaptive motion refinement and sampling
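The GPT-based semantic augmentation above amounts to labeling every paraphrase of a training command with the original command's trajectory. A minimal sketch of that dataset construction, using hand-written variants in place of GPT-generated paraphrases (all names and data here are illustrative, not from the paper):

```python
def augment_dataset(dataset, paraphrases):
    """Label each paraphrase of a command with the original trajectory.

    dataset:     {command: trajectory} pairs from human-guided demonstrations
    paraphrases: {command: [variant, ...]} — in DLM these variants would be
                 generated by GPT; here they are written by hand.
    """
    augmented = dict(dataset)
    for command, variants in paraphrases.items():
        trajectory = dataset[command]
        for variant in variants:
            augmented[variant] = trajectory   # same motion, different phrasing
    return augmented

# Toy example: one demonstrated command, two hypothetical paraphrases.
base = {"move forward": [(0, 0), (1, 0), (2, 0)]}
gpt_variants = {"move forward": ["go straight ahead", "head forward"]}
data = augment_dataset(base, gpt_variants)
# All three phrasings now map to the same trajectory.
```

Training behavior cloning on the augmented set is what lets the policy respond consistently to phrasings it never saw verbatim.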
Eran Beeri Bamani
School of Mechanical Engineering, Tel-Aviv University, Israel
Eden Nissinman
School of Mechanical Engineering, Tel-Aviv University, Israel
Rotem Atari
School of Mechanical Engineering, Tel-Aviv University, Israel
Nevo Heimann Saadon
School of Mechanical Engineering, Tel-Aviv University, Israel
Avishai Sintov
Tel-Aviv University
Robotics