🤖 AI Summary
This paper addresses the challenge of jointly modeling natural language instruction understanding and single-frame RGB-D perception in task-oriented grasping, a setting characterized by poor generalization and weak semantic alignment. The authors propose the first open-vocabulary, semantics-driven universal grasping model. Methodologically, they fine-tune the Molmo vision-language model to jointly encode RGB-D features and open-vocabulary instructions; introduce PRISM, a large-scale synthetic dataset (379k samples) covering cluttered scenes and realistic task descriptions (e.g., "pour me some tea"); and demonstrate, for the first time, zero-shot, semantically correct bimanual grasp pose prediction. In real-world evaluation, the method attains a 70% task success rate, twice that of the strongest baseline (35%). The dataset, model, code, and benchmark are publicly released to advance research in open-vocabulary embodied intelligence.
📝 Abstract
We present GraspMolmo, a generalizable open-vocabulary task-oriented grasping (TOG) model. GraspMolmo predicts semantically appropriate, stable grasps conditioned on a natural language instruction and a single RGB-D frame. For instance, given "pour me some tea", GraspMolmo selects a grasp on a teapot handle rather than its body. Unlike prior TOG methods, which are limited by small datasets, simplistic language, and uncluttered scenes, GraspMolmo learns from PRISM, a novel large-scale synthetic dataset of 379k samples featuring cluttered environments and diverse, realistic task descriptions. We fine-tune the Molmo vision-language model on this data, enabling GraspMolmo to generalize to novel open-vocabulary instructions and objects. In challenging real-world evaluations, GraspMolmo achieves state-of-the-art results, with a 70% prediction success rate on complex tasks, compared to 35% for the next best alternative. GraspMolmo also demonstrates the ability to predict semantically correct bimanual grasps zero-shot. To accelerate research in task-semantic robotic manipulation, we release our synthetic dataset, code, model, and benchmarks, which, along with videos, are available at https://abhaybd.github.io/GraspMolmo/.