RealVLG-R1: A Large-Scale Real-World Visual-Language Grounding Benchmark for Robotic Perception and Manipulation

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing vision-language grounding methods, which support only coarse-grained object-level localization, and conventional robotic grasping approaches that lack language guidance for fine-grained semantic manipulation. To bridge this gap, the authors propose RealVLG, a novel framework that introduces RealVLG-11B—a large-scale, real-world multimodal benchmark comprising bounding boxes, segmentation masks, grasp poses, contact points, and human-verified fine-grained language descriptions. Building upon pretrained vision-language models, they further develop RealVLG-R1, which leverages reinforcement-based fine-tuning to jointly predict multimodal outputs. The approach enables zero-shot, language-driven perception and grasping in previously unseen real-world environments, unifying semantic understanding, visual grounding, and action execution. Code and data are publicly released.
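The joint prediction described above can be pictured as a single language-conditioned query that returns every output modality at once. The sketch below is purely illustrative: the class, field names, and the model.predict call are assumptions for exposition, not the released RealVLG-R1 API.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class GroundedGraspPrediction:
        """Joint output of one language-driven query (hypothetical field names)."""
        instruction: str                                  # e.g. "pick up the blue mug by its handle"
        bbox_xyxy: Tuple[float, float, float, float]      # object-level bounding box
        mask_rle: str                                     # segmentation mask, run-length encoded
        grasp_pose: Tuple[float, float, float, float]     # e.g. planar grasp (x, y, angle, width)
        contact_points: List[Tuple[float, float]]         # pixel-space contact points

    def ground_and_grasp(model, image, instruction: str) -> GroundedGraspPrediction:
        """Run one zero-shot perception-and-grasping query against a model wrapper.

        model.predict is a placeholder for whatever inference entry point the
        released code exposes; only the overall input/output shape is shown.
        """
        raw = model.predict(image=image, text=instruction)
        return GroundedGraspPrediction(
            instruction=instruction,
            bbox_xyxy=tuple(raw["bbox"]),
            mask_rle=raw["mask"],
            grasp_pose=tuple(raw["grasp"]),
            contact_points=[tuple(p) for p in raw["contacts"]],
        )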

📝 Abstract
Visual-language grounding (VLG) aims to establish semantic correspondences between natural language and visual entities, enabling models to accurately identify and localize target objects based on textual instructions. Existing VLG approaches focus on coarse-grained, object-level localization, while traditional robotic grasping methods rely predominantly on geometric cues and lack language guidance, which limits their applicability in language-driven manipulation scenarios. To address these limitations, we propose the RealVLG framework, which integrates the RealVLG-11B dataset and the RealVLG-R1 model to unify real-world visual-language grounding and grasping tasks. The RealVLG-11B dataset provides multi-granularity annotations including bounding boxes, segmentation masks, grasp poses, contact points, and human-verified fine-grained language descriptions, covering approximately 165,000 images, over 800 object instances, 1.3 million segmentation, detection, and language annotations, and roughly 11 billion grasping examples. Building on this dataset, RealVLG-R1 employs Reinforcement Fine-tuning on pretrained large-scale vision-language models to predict bounding boxes, segmentation masks, grasp poses, and contact points in a unified manner given natural language instructions. Experimental results demonstrate that RealVLG supports zero-shot perception and manipulation in unseen real-world environments, establishing a unified semantic-visual multimodal benchmark that provides a comprehensive data and evaluation platform for language-driven robotic perception and grasping policy learning. All data and code are publicly available at https://github.com/lif314/RealVLG-R1.
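To make the multi-granularity annotations concrete, a single RealVLG-11B record could be imagined along the lines below. This is a hedged sketch: the field names, encodings, and values are invented for illustration and are not taken from the released dataset.

    # Hypothetical annotation record illustrating the label granularities listed
    # in the abstract; the actual schema of the released dataset may differ.
    example_record = {
        "image_id": "scene_000123.png",
        "object_instance": "blue_ceramic_mug",
        # human-verified, fine-grained language description
        "description": "the blue ceramic mug with a chipped handle, left of the bowl",
        "bbox_xyxy": [412.0, 188.5, 590.0, 402.0],          # detection label
        "mask_rle": "<run-length-encoded mask>",             # segmentation label (placeholder)
        "grasp_poses": [                                      # candidate grasp annotations
            {"center_xy": [501.0, 295.0], "angle_rad": 1.21, "width_px": 64.0, "score": 0.93},
        ],
        "contact_points": [[478.5, 310.0], [523.5, 280.0]],  # pixel-space contact pair
    }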
Problem

Research questions and friction points this paper is trying to address.

visual-language grounding
robotic manipulation
language-driven grasping
fine-grained localization
multimodal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual-language grounding
robotic manipulation
multimodal benchmark
reinforcement fine-tuning
zero-shot grasping
Linfei Li
PhD Student, Tongji University
Computer Vision, Robot Learning
Lin Zhang
School of Computer Science and Technology, Tongji University, China
Ying Shen
School of Computer Science and Technology, Tongji University, China