🤖 AI Summary
Ambiguous language prompts and redundant visual cues in robotic manipulation hinder clear task intent specification. Method: This paper proposes an object-centric vision-language-action joint model featuring a novel, interpretable 2D visual prompting mechanism: geometric symbols—such as end-effector pose and motion direction—are overlaid directly onto RGB images to explicitly encode SE(3) spatial contact relationships and high-level planning intent. The model employs a multimodal fusion architecture, an SE(3)-aware action prediction head, and a keyframe-sequenced execution strategy, supporting both human-provided and automated prompt generation. Contribution/Results: Evaluated in simulation and on real robotic platforms, the approach significantly improves cross-task generalization and long-horizon robustness; multi-step dexterous manipulation success rates increase by 27% over baseline methods.
📝 Abstract
In robotic, task goals can be conveyed through various modalities, such as language, goal images, and goal videos. However, natural language can be ambiguous, while images or videos may offer overly detailed specifications. To tackle these challenges, we introduce CrayonRobo that leverages comprehensive multi-modal prompts that explicitly convey both low-level actions and high-level planning in a simple manner. Specifically, for each key-frame in the task sequence, our method allows for manual or automatic generation of simple and expressive 2D visual prompts overlaid on RGB images. These prompts represent the required task goals, such as the end-effector pose and the desired movement direction after contact. We develop a training strategy that enables the model to interpret these visual-language prompts and predict the corresponding contact poses and movement directions in SE(3) space. Furthermore, by sequentially executing all key-frame steps, the model can complete long-horizon tasks. This approach not only helps the model explicitly understand the task objectives but also enhances its robustness on unseen tasks by providing easily interpretable prompts. We evaluate our method in both simulated and real-world environments, demonstrating its robust manipulation capabilities.