ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models suffer from diffuse visual attention, hindering precise localization of target objects for manipulation. To address this, we propose a reconstruction-based VLA framework incorporating an implicit grounding mechanism: a diffusion Transformer reconstructs the robot's foveal region in a self-supervised manner, implicitly guiding visual attention toward manipulable objects and enhancing fine-grained cross-modal representation learning. The model is pre-trained on a large-scale dataset comprising over 100K trajectories and 2M samples. Experiments demonstrate significant improvements in manipulation accuracy and cross-task generalization on both simulation and real-robot platforms, outperforming state-of-the-art VLA models. Our core contribution lies in establishing visual reconstruction as an implicit grounding paradigm for attention guidance: it enables joint optimization of object perception and action execution without requiring explicit grounding annotations.
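The summary describes a joint objective: the usual action-prediction loss plus a diffusion denoising (reconstruction) loss on the foveal/gaze region, conditioned on the VLA's visual outputs. Below is a minimal sketch of how such a joint loss could be combined, using NumPy with a stand-in linear map in place of the diffusion Transformer. All names (`diffusion_recon_loss`, `gaze_patch`, the weight `lam`, etc.) and shapes are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# DDPM-style linear noise schedule over T diffusion steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def diffusion_recon_loss(gaze_patch, cond_tokens, t):
    """Denoising loss on the cropped gaze region, conditioned on the
    VLA's visual tokens. The 'network' here is a fixed linear map,
    standing in for the diffusion Transformer."""
    eps = rng.standard_normal(gaze_patch.shape)           # target noise
    x_t = (np.sqrt(alpha_bars[t]) * gaze_patch
           + np.sqrt(1.0 - alpha_bars[t]) * eps)          # noised patch
    # Placeholder predictor: noised patch plus a conditioning bias
    # derived from the visual tokens.
    eps_hat = 0.9 * x_t + 0.1 * cond_tokens.mean()
    return np.mean((eps_hat - eps) ** 2)

def action_loss(pred_action, target_action):
    return np.mean((pred_action - target_action) ** 2)

# Toy tensors standing in for one training sample.
gaze_patch = rng.standard_normal((16, 16, 3))   # cropped foveal region
cond_tokens = rng.standard_normal((32, 64))     # VLA visual output tokens
pred_a, target_a = rng.standard_normal(7), rng.standard_normal(7)

t = int(rng.integers(0, T))
lam = 0.1  # reconstruction weight (hypothetical value)
total = action_loss(pred_a, target_a) + lam * diffusion_recon_loss(
    gaze_patch, cond_tokens, t)
print(float(total))
```

Because the reconstruction loss is differentiable with respect to the conditioning tokens, gradients from the denoising objective would flow back into the VLA's visual representation, which is the mechanism the summary credits for steering attention toward the manipulated object without explicit grounding labels.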

📝 Abstract
Recent advances in Vision-Language-Action (VLA) models have enabled robotic agents to integrate multimodal understanding with action execution. However, our empirical analysis reveals that current VLAs struggle to allocate visual attention to target regions; instead, their attention tends to be dispersed across the scene. To ground visual attention on the correct target, we propose ReconVLA, a reconstructive VLA model with an implicit grounding paradigm. Conditioned on the model's visual outputs, a diffusion transformer aims to reconstruct the gaze region of the image, which corresponds to the target manipulated objects. This process prompts the VLA model to learn fine-grained representations and accurately allocate visual attention, thus effectively leveraging task-specific visual information and conducting precise manipulation. Moreover, we curate a large-scale pretraining dataset comprising over 100K trajectories and 2 million data samples from open-source robotic datasets, further boosting the model's generalization in visual reconstruction. Extensive experiments in simulation and the real world demonstrate the superiority of our implicit grounding method, showcasing its capabilities of precise manipulation and generalization. Our project page is https://zionchow.github.io/ReconVLA/.
Problem

Research questions and friction points this paper is trying to address.

Current VLAs fail to focus visual attention on target regions
ReconVLA reconstructs gaze regions to improve attention allocation
Model enhances fine-grained representation for precise robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstructive VLA model with implicit grounding
Diffusion transformer for gaze region reconstruction
Large-scale pretraining dataset for generalization