Gentle Manipulation Policy Learning via Demonstrations from VLM Planned Atomic Skills

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of long-horizon, high-contact dexterous manipulation, namely heavy reliance on real-world data and human demonstrations and poor generalization, this paper proposes a human-demonstration-free, end-to-end learning framework. Methodologically: (1) a vision-language model (VLM) performs high-level semantic task decomposition and atomic skill planning; (2) an explicit force-constrained reinforcement learning mechanism ensures safe physical interaction; (3) a diffusion-based policy network integrates visuo-tactile perception for robust closed-loop control; and (4) knowledge distillation bridges the sim-to-real gap. Evaluated both in simulation and on physical robot platforms, the approach significantly improves manipulation safety for fragile objects and cross-task generalization, achieving higher multi-task success rates than state-of-the-art demonstration-free baselines.

📝 Abstract
Autonomous execution of long-horizon, contact-rich manipulation tasks traditionally requires extensive real-world data and expert engineering, posing significant cost and scalability challenges. This paper proposes a novel framework integrating hierarchical semantic decomposition, reinforcement learning (RL), vision-language models (VLMs), and knowledge distillation to overcome these limitations. Complex tasks are decomposed into atomic skills, with a policy for each primitive trained via RL exclusively in simulation. Crucially, our RL formulation incorporates explicit force constraints to prevent object damage during delicate interactions. VLMs perform high-level task decomposition and skill planning, generating diverse expert demonstrations. These are distilled into a unified policy via a Visual-Tactile Diffusion Policy for end-to-end execution. We conduct comprehensive ablation studies exploring different VLM-based task planners to identify optimal demonstration generation pipelines, and systematically compare imitation learning algorithms for skill distillation. Extensive simulation experiments and physical deployment validate that our approach achieves policy learning for long-horizon manipulation without costly human demonstrations, while the VLM-guided atomic skill framework enables scalable generalization to diverse tasks.
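The "many demonstrations → one unified policy" distillation step described in the abstract can be illustrated with a minimal behavior-cloning sketch. This is not the paper's implementation (which uses a visuo-tactile diffusion policy); here a linear policy is fit by least squares over pooled (observation, action) pairs from several skills, and all shapes and data are illustrative assumptions.

```python
# Minimal behavior-cloning sketch of skill distillation (illustrative only;
# the paper's system distills into a Visual-Tactile Diffusion Policy).
import numpy as np

rng = np.random.default_rng(0)

# Pooled demonstrations from several atomic skills (assumed shapes).
obs = rng.normal(size=(200, 4))      # observations
true_W = rng.normal(size=(4, 2))     # hidden expert mapping
acts = obs @ true_W                  # expert actions (noise-free here)

# Distill: fit one linear policy to all demonstrations at once.
W, *_ = np.linalg.lstsq(obs, acts, rcond=None)

# The unified policy reproduces the expert on held-out observations.
test_obs = rng.normal(size=(5, 4))
err = np.max(np.abs(test_obs @ W - test_obs @ true_W))
print(f"max action error: {err:.2e}")
```

With noise-free demonstrations the least-squares fit recovers the expert mapping almost exactly; real distillation trades this closed-form step for gradient-based training of a far more expressive policy class.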
Problem

Research questions and friction points this paper is trying to address.

Learning manipulation policies without costly human demonstrations for long-horizon tasks
Overcoming scalability challenges in contact-rich manipulation using semantic decomposition
Ensuring safe object interaction through force-constrained reinforcement learning in simulation
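The force-constrained RL idea listed above can be sketched as a reward penalty on contact-force overshoot. The function name, threshold, and penalty weight are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of force-constrained RL: the task reward is penalized
# whenever measured contact force exceeds a safety threshold, discouraging
# damage to fragile objects. All names and constants are assumptions.

def constrained_reward(task_reward: float,
                       contact_force: float,
                       force_limit: float = 5.0,    # assumed limit (N)
                       penalty_weight: float = 2.0) -> float:
    """Subtract a penalty proportional to force overshoot beyond the limit."""
    overshoot = max(0.0, contact_force - force_limit)
    return task_reward - penalty_weight * overshoot

# Within the limit the reward is unchanged...
print(constrained_reward(1.0, 3.0))   # 1.0
# ...while exceeding it is penalized proportionally.
print(constrained_reward(1.0, 7.0))   # 1.0 - 2.0 * 2.0 = -3.0
```

A soft penalty like this is one common way to express a force constraint; constrained-RL formulations with hard limits or Lagrangian multipliers are alternatives.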
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical semantic decomposition via VLM planning
Reinforcement learning with explicit force constraints
Visual-Tactile Diffusion Policy for skill distillation
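The VLM-planning innovation above can be sketched as a planner that maps a long-horizon instruction to an ordered sequence of atomic skills, each backed by its own RL-trained policy. The skill library, prompt wording, and the `query_vlm` stub are all illustrative assumptions standing in for a real VLM call.

```python
# Hypothetical sketch of VLM-based atomic skill planning. `query_vlm` is a
# stand-in for a real vision-language model query; its output is filtered
# against the supported skill library before execution.

ATOMIC_SKILLS = ["reach", "grasp", "lift", "move", "place", "release"]

def query_vlm(prompt: str) -> str:
    # Stub: a real system would send `prompt` (plus images) to a VLM.
    return "reach, grasp, lift, move, place, release"

def plan_atomic_skills(task: str) -> list[str]:
    prompt = (f"Decompose the task '{task}' into an ordered list of atomic "
              f"skills chosen from {ATOMIC_SKILLS}.")
    plan = [s.strip() for s in query_vlm(prompt).split(",")]
    # Keep only skills the library actually supports.
    return [s for s in plan if s in ATOMIC_SKILLS]

print(plan_atomic_skills("put the egg in the bowl"))
```

Constraining the plan to a fixed skill vocabulary is what lets each step dispatch to a pretrained simulation policy, and it is also a natural place to reject hallucinated skills from the VLM.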