Gondola: Grounded Vision Language Planning for Generalizable Robotic Manipulation

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generalizing robotic manipulation to unseen objects, environments, and diverse language instructions remains challenging, particularly due to single-view visual input and imprecise visual grounding. To address this, the paper proposes a multi-view embodied vision-language planning model whose architecture produces interleaved text–mask outputs, conditions on plan history together with multi-view images, and grounds action generation in segmentation masks of target objects and locations. The model is trained via visual instruction tuning of an LLM on three datasets constructed in the RLBench simulator: robot grounded planning, multi-view referring expression, and pseudo long-horizon tasks. This significantly improves visual grounding accuracy. Evaluated across GemBench's four generalization levels (novel placements, rigid objects, articulated objects, and long-horizon tasks), the approach outperforms the prior state-of-the-art LLM-based method, and is presented as the first to jointly optimize end-to-end embodied planning for multi-view perception and precise object localization.
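The interleaved text–mask output described above can be pictured as plan steps whose text carries placeholders that resolve to segmentation masks of target objects and locations. A minimal sketch, where `PlanStep`, `render`, and the `{obj0}`-style placeholder syntax are illustrative assumptions rather than the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    """One grounded plan step: instruction text interleaved with object masks.

    `text` contains placeholders such as {obj0}; `masks` maps each
    placeholder name to a binary segmentation mask (rows of 0/1).
    """
    text: str
    masks: dict

def render(step: PlanStep) -> str:
    """Substitute each placeholder with a tag showing its mask's pixel count."""
    out = step.text
    for name, mask in step.masks.items():
        pixels = sum(sum(row) for row in mask)
        out = out.replace("{" + name + "}", f"<mask:{pixels}px>")
    return out

# Toy 4x4 mask standing in for a real segmentation output.
obj0 = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
step = PlanStep(text="pick up {obj0} and place it in {loc0}",
                masks={"obj0": obj0, "loc0": [[1]]})
print(render(step))  # pick up <mask:4px> and place it in <mask:1px>
```

The point of the structure is that each planned action refers to pixel-level object regions rather than free-floating text, which is what enables precise grounding across views.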

📝 Abstract
Robotic manipulation faces a significant challenge in generalizing across unseen objects, environments, and tasks specified by diverse language instructions. To improve generalization capabilities, recent research has incorporated large language models (LLMs) for planning and action execution. While promising, these methods often fall short in generating grounded plans in visual environments. Although efforts have been made to perform visual instruction tuning on LLMs for robotic manipulation, existing methods are typically constrained by single-view image input and struggle with precise object grounding. In this work, we introduce Gondola, a novel grounded vision-language planning model based on LLMs for generalizable robotic manipulation. Gondola takes multi-view images and history plans to produce the next action plan with interleaved texts and segmentation masks of target objects and locations. To support the training of Gondola, we construct three types of datasets using the RLBench simulator, namely robot grounded planning, multi-view referring expression and pseudo long-horizon task datasets. Gondola outperforms the state-of-the-art LLM-based method across all four generalization levels of the GemBench dataset, including novel placements, rigid objects, articulated objects and long-horizon tasks.
Problem

Research questions and friction points this paper is trying to address.

Generalizing robotic manipulation across unseen objects and environments
Improving grounded plan generation in visual environments
Overcoming the limitations of single-view input and imprecise object grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view images enhance object grounding
Interleaved text and segmentation masks
Three RLBench-built training datasets: robot grounded planning, multi-view referring expression, and pseudo long-horizon tasks