Sketch-to-Skill: Bootstrapping Robot Learning with Human Drawn Trajectory Sketches

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Robot manipulation policy training typically relies on expert demonstrations or extensive environment interaction, incurring high annotation and trial-and-error costs. Method: This paper proposes a sketch-driven, end-to-end skill-initialization framework that takes only a user-drawn 2D trajectory sketch as input. A Sketch-to-3D Trajectory Generator maps the sketch to a feasible 3D manipulation trajectory; behavior-cloning pretraining is then combined with sketch-guided exploration for efficient reinforcement learning (RL). Contribution/Results: Unlike prior sketch-based interfaces, which were limited to imitation learning or conditioned policies, this approach enables cross-task generalization. Experiments show that sketches alone achieve ~96% of expert-teleoperation performance and outperform pure RL by ~170%, substantially reducing dependence on expert knowledge and environment interaction.
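The summarized pipeline (2D sketch → 3D trajectory → behavior-cloning pretraining) can be sketched as follows. All names here (`sketch_to_3d`, `bc_pretrain`) and the linear depth profile are illustrative assumptions, not the paper's actual model, which learns the 2D-to-3D lifting:

```python
import numpy as np

def sketch_to_3d(sketch_2d, depth_start=0.2, depth_end=0.05):
    """Lift a 2D sketch (N x 2, normalized image coords) to a 3D
    trajectory. The paper trains a generator for this step; the
    linearly interpolated depth here is only a stand-in."""
    depth = np.linspace(depth_start, depth_end, len(sketch_2d))
    return np.column_stack([sketch_2d, depth])  # N x 3 waypoints

def bc_pretrain(trajectory):
    """Fit a linear policy mapping normalized time -> 3D waypoint
    by least squares, standing in for behavior cloning a network
    on the sketch-generated demonstrations."""
    n = len(trajectory)
    t = np.linspace(0.0, 1.0, n)
    X = np.column_stack([t, np.ones(n)])          # time feature + bias
    W, *_ = np.linalg.lstsq(X, trajectory, rcond=None)
    return lambda time: np.array([time, 1.0]) @ W

# A toy four-point sketch of a reach-and-lower motion.
sketch = np.array([[0.1, 0.9], [0.4, 0.6], [0.7, 0.4], [0.9, 0.1]])
traj_3d = sketch_to_3d(sketch)
policy = bc_pretrain(traj_3d)
print(traj_3d.shape)   # (4, 3)
print(policy(0.0))     # near the first 3D waypoint
```

In the full method this pretrained policy is then refined with RL rather than used directly.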

📝 Abstract
Training robotic manipulation policies traditionally requires numerous demonstrations and/or environmental rollouts. While recent Imitation Learning (IL) and Reinforcement Learning (RL) methods have reduced the number of required demonstrations, they still rely on expert knowledge to collect high-quality data, limiting scalability and accessibility. We propose Sketch-to-Skill, a novel framework that leverages human-drawn 2D sketch trajectories to bootstrap and guide RL for robotic manipulation. Our approach extends beyond previous sketch-based methods, which were primarily focused on imitation learning or policy conditioning, limited to specific trained tasks. Sketch-to-Skill employs a Sketch-to-3D Trajectory Generator that translates 2D sketches into 3D trajectories, which are then used to autonomously collect initial demonstrations. We utilize these sketch-generated demonstrations in two ways: to pre-train an initial policy through behavior cloning and to refine this policy through RL with guided exploration. Experimental results demonstrate that Sketch-to-Skill achieves ~96% of the performance of the baseline model that leverages teleoperated demonstration data, while exceeding the performance of a pure reinforcement learning policy by ~170%, only from sketch inputs. This makes robotic manipulation learning more accessible and potentially broadens its applications across various domains.
Problem

Research questions and friction points this paper is trying to address.

Reduces need for expert demonstrations in robot training
Translates 2D sketches into 3D robotic trajectories
Enhances robotic manipulation learning accessibility and scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses human-drawn 2D sketch trajectories
Translates sketches into 3D robot trajectories
Combines behavior cloning with reinforcement learning
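The last bullet, refining the behavior-cloned policy via RL with sketch-guided exploration, can be illustrated with a toy reward-shaping term. The exponential kernel, the `beta` weight, and the function name `guidance_bonus` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def guidance_bonus(state, reference_traj, beta=1.0):
    """Toy shaping term: reward the RL agent for staying near the
    sketch-derived reference trajectory during exploration.
    `reference_traj` is an N x 3 array of waypoints; the bonus
    decays exponentially with distance to the nearest waypoint."""
    dists = np.linalg.norm(reference_traj - state, axis=1)
    return beta * np.exp(-dists.min())

# Reference trajectory lifted from a sketch (toy values).
ref = np.array([[0.0, 0.0, 0.1], [0.5, 0.5, 0.1], [1.0, 1.0, 0.1]])

on_path = guidance_bonus(np.array([0.5, 0.5, 0.1]), ref)
off_path = guidance_bonus(np.array([2.0, 2.0, 1.0]), ref)
print(on_path > off_path)  # True: states near the sketch earn more
```

A bonus like this would be added to the task reward during RL fine-tuning, biasing exploration toward the sketched motion without constraining the final policy to it.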