🤖 AI Summary
Natural language instructions for robotic manipulation suffer from semantic ambiguity and polysemy, leading to poor policy generalization and limited interpretability. Method: We propose a framework that couples vision-language model (VLM)-driven code generation with an attention-enhanced diffusion policy. First, a VLM parses ambiguous instructions into structured, executable Python code that serves as a semantic intermediate representation. Second, this code conditions a diffusion-based policy, augmented with a 3D spatial attention mechanism, to generate high-fidelity action sequences. Contribution/Results: Our approach decouples semantic understanding from action generation and explicitly models spatial interactions via 3D attention. Experiments demonstrate significant improvements over state-of-the-art imitation learning methods on tasks involving linguistic ambiguity, contact-rich manipulation, and multi-object interaction, achieving superior cross-environment generalization and ambiguity resolution.
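To make the pipeline concrete, here is a minimal sketch of the kind of structured, executable Python code a VLM might emit as the intermediate representation. All names here (`Detection`, `resolve_task`, the nearest-mug disambiguation heuristic) are hypothetical illustrations, not the framework's actual API:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical stand-ins for the perception interface; the real
# framework's API is not specified in the summary above.

@dataclass
class Detection:
    name: str
    position: np.ndarray  # (3,) world-frame coordinates in meters
    occupied: bool = False

def distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

# The kind of structured, executable code a VLM might emit for the
# ambiguous instruction "hang a mug on the mug tree":
def resolve_task(detections: list[Detection], gripper_pos: np.ndarray):
    mugs = [d for d in detections if d.name == "mug"]
    branches = [d for d in detections if d.name == "branch" and not d.occupied]
    # Disambiguation heuristic: pick the mug nearest the gripper
    # and the first unoccupied branch.
    mug = min(mugs, key=lambda m: distance(m.position, gripper_pos))
    branch = branches[0]
    # Task-relevant targets, handed to the 3D attention module
    # rather than directly to the policy.
    return mug, branch
```

In this reading, the generated code carries the semantic decision (which mug, which branch) in an inspectable form, while the diffusion policy only sees the resulting spatial conditioning, matching the decoupling claimed above.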
📝 Abstract
Natural language instructions for robotic manipulation tasks often exhibit ambiguity and vagueness. For instance, the instruction "Hang a mug on the mug tree" may involve multiple valid actions if there are several mugs and branches to choose from. Existing language-conditioned policies typically rely on end-to-end models that jointly handle high-level semantic understanding and low-level action generation, which can result in suboptimal performance due to their lack of modularity and interpretability. To address these challenges, we introduce a novel robotic manipulation framework that can accomplish tasks specified by potentially ambiguous natural language. This framework employs a Vision-Language Model (VLM) to interpret abstract concepts in natural language instructions and to generate task-specific code, an interpretable and executable intermediate representation. The generated code interfaces with the perception module to produce 3D attention maps that highlight task-relevant regions by integrating spatial and semantic information, effectively resolving ambiguities in instructions. Through extensive experiments, we identify key limitations of current imitation learning methods, such as poor adaptation to language and environmental variations. We show that our approach excels across challenging manipulation tasks involving language ambiguity, contact-rich manipulation, and multi-object interactions.
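As a rough illustration of how code-selected targets could become a 3D attention map, here is a minimal sketch assuming a voxelized workspace and Gaussian soft-attention; the function name `attention_map`, the `sigma` parameter, and the example targets are assumptions, not the paper's implementation:

```python
import numpy as np

def attention_map(grid_points: np.ndarray, targets: np.ndarray,
                  sigma: float = 0.05) -> np.ndarray:
    """Soft 3D attention over a voxel grid.

    grid_points: (N, 3) voxel-center coordinates in meters.
    targets:     (K, 3) task-relevant 3D locations (e.g. the chosen
                 mug handle and branch) produced by the generated code.
    Returns an (N,) map in [0, 1], peaked at the targets.
    """
    # Squared distance from every voxel to every target: (N, K)
    d2 = ((grid_points[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    # One Gaussian bump per target; take the max so each voxel
    # attends to its nearest task-relevant region.
    return np.exp(-d2 / (2 * sigma ** 2)).max(axis=1)

# Example: a 20 cm cube discretized at 1 cm, two hypothetical targets.
xs = np.linspace(0.0, 0.2, 21)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), -1).reshape(-1, 3)
targets = np.array([[0.05, 0.10, 0.15], [0.18, 0.02, 0.07]])
attn = attention_map(grid, targets)
print(attn.shape, attn.max())  # (9261,) 1.0
```

Conditioning the diffusion policy on such a map would bias action generation toward the disambiguated regions, which is the role the abstract assigns to the 3D attention maps.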