VP-VLA: Visual Prompting as an Interface for Vision-Language-Action Models

📅 2026-03-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language-action (VLA) models struggle to simultaneously achieve high spatial localization accuracy and strong out-of-distribution generalization due to the tight coupling of instruction understanding, spatial reasoning, and motor control within a single forward pass. To address this limitation, this work proposes VP-VLA, a dual-system framework that explicitly decouples high-level task planning from low-level action execution through structured visual promptsβ€”such as crosshairs or bounding boxes. In this architecture, a System 2 planner interprets language instructions and generates visual prompts indicating target locations, while a System 1 controller produces precise actions conditioned on these prompts, further enhanced by a visual localization loss to improve spatial understanding. Evaluated on Robocasa-GR1-Tabletop and SimplerEnv, VP-VLA achieves absolute success rate improvements of 5% and 8.3%, respectively, substantially outperforming baselines including QwenOFT and GR00T-N1.6.
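The structured visual prompts at the heart of this interface can be pictured as a simple rendering step applied to the observation before it reaches the System 1 controller. The helper below is a minimal sketch, assuming crosshair and bounding-box overlays on an RGB array; the function name and rendering details are illustrative, not VP-VLA's actual implementation.

```python
import numpy as np

def render_visual_prompts(obs, center, box, color=(255, 0, 0)):
    """Overlay a crosshair at `center` (cx, cy) and a bounding box
    `box` (x0, y0, x1, y1) onto an RGB observation.

    A hypothetical sketch of structured visual prompting; the paper's
    exact rendering (thickness, colors, prompt types) may differ."""
    img = obs.copy()
    cx, cy = center
    img[cy, :] = color           # horizontal crosshair line
    img[:, cx] = color           # vertical crosshair line
    x0, y0, x1, y1 = box
    img[y0, x0:x1 + 1] = color   # top edge of the box
    img[y1, x0:x1 + 1] = color   # bottom edge
    img[y0:y1 + 1, x0] = color   # left edge
    img[y0:y1 + 1, x1] = color   # right edge
    return img

# Example: mark a target location and a goal region on a blank observation.
obs = np.zeros((64, 64, 3), dtype=np.uint8)
prompted = render_visual_prompts(obs, center=(32, 20), box=(10, 30, 40, 55))
```

Because the prompts live in pixel space, the controller consumes them through the same visual encoder as the raw observation, which is what lets the planner and controller communicate without sharing internal representations.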

📝 Abstract
Vision-Language-Action (VLA) models typically map visual observations and linguistic instructions directly to robotic control signals. This "black-box" mapping forces a single forward pass to simultaneously handle instruction interpretation, spatial grounding, and low-level control, often leading to poor spatial precision and limited robustness in out-of-distribution scenarios. To address these limitations, we propose VP-VLA, a dual-system framework that decouples high-level reasoning and low-level execution via a structured visual prompting interface. Specifically, a "System 2 Planner" decomposes complex instructions into sub-tasks and identifies relevant target objects and goal locations. These spatial anchors are then overlaid directly onto visual observations as structured visual prompts, such as crosshairs and bounding boxes. Guided by these prompts and enhanced by a novel auxiliary visual grounding objective during training, a "System 1 Controller" reliably generates precise low-level execution motions. Experiments on the Robocasa-GR1-Tabletop benchmark and SimplerEnv simulation demonstrate that VP-VLA improves success rates by 5% and 8.3%, surpassing competitive baselines including QwenOFT and GR00T-N1.6.
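The auxiliary visual grounding objective can be pictured as adding a localization term to the controller's action loss, encouraging the controller to predict where the visual prompt points. The function below is a hedged sketch: the squared-error forms and the weight `lam` are assumptions for illustration, not terms taken from the paper.

```python
import numpy as np

def training_loss(pred_actions, gt_actions, pred_center, prompt_center, lam=0.1):
    """Hypothetical combined objective for the System 1 controller:
    behavior-cloning MSE on actions plus an auxiliary localization term
    tying the controller's predicted target to the visual prompt's
    location. `lam` balances the two terms (value assumed)."""
    action_loss = np.mean((pred_actions - gt_actions) ** 2)
    grounding_loss = np.mean((pred_center - prompt_center) ** 2)
    return action_loss + lam * grounding_loss

# Example: small action error, 2-pixel localization error.
total = training_loss(
    pred_actions=np.array([0.1, -0.2]),
    gt_actions=np.array([0.0, 0.0]),
    pred_center=np.array([32.0, 20.0]),
    prompt_center=np.array([30.0, 22.0]),
)
```

The design choice here is that the grounding term is auxiliary: it shapes the controller's spatial representation during training but adds no cost at inference, where only the action head is used.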
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
spatial grounding
robustness
out-of-distribution
robotic control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Prompting
Vision-Language-Action Models
Dual-System Architecture
Spatial Grounding
Robotic Control