🤖 AI Summary
This work addresses the limitations of current vision-language-action (VLA) models, which struggle to leverage visual details for high-quality action generation due to architectural biases, redundant visual tokens, and task-irrelevant noise. The study systematically demonstrates, for the first time, that how visual information is utilized, rather than the quality of the visual representations themselves, constitutes a critical bottleneck in VLA performance. To overcome this, the authors propose a "focused visual utilization" paradigm with two components: Modality Cascaded Attention, which eliminates shortcut dependencies between modalities, and a Focus Attention mechanism, which dynamically selects task-relevant visual patches while suppressing noise. Evaluated on both simulated and real-world robotic benchmarks, the approach significantly improves manipulation performance and convergence speed, enabling more dexterous task execution.
📝 Abstract
Vision-Language-Action (VLA) models improve action generation by conditioning policies on rich vision-language information. However, current auto-regressive policies are constrained by three bottlenecks: (1) architectural bias drives models to overlook visual details, (2) an excessive number of visual tokens makes it difficult for attention to focus on the correct regions, and (3) task-irrelevant visual information introduces substantial noise; together, these severely impair action quality. In this paper, we investigate how to effectively utilize different visual representations for action generation. To this end, we first empirically validate the above issues and show that VLA performance is primarily limited by how visual information is utilized, rather than by the quality of visual representations. Based on these insights, we introduce FocusVLA, a novel paradigm that directs the model's attention to task-relevant visual regions to effectively bridge vision to action. Specifically, we first propose Modality Cascaded Attention to eliminate shortcut pathways, thereby compelling VLA models to rely on task-relevant visual details for action generation. Furthermore, we propose Focus Attention, which dynamically selects task-relevant visual patches to control information quantity while explicitly modulating their influence to suppress task-irrelevant noise. Extensive experiments on both simulated and real-world robotic benchmarks demonstrate that FocusVLA not only leverages visual details effectively to perform dexterous manipulations, but also substantially improves performance and accelerates convergence across a variety of tasks.
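The Focus Attention idea described above, selecting a small set of task-relevant visual patches and explicitly modulating their influence, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, shapes, and scoring scheme (scaled dot product of patch embeddings against a pooled task/language query, top-k selection, softmax reweighting) are all illustrative assumptions:

```python
import numpy as np

def focus_attention(visual_tokens, task_query, k=64):
    """Hypothetical sketch of a Focus-Attention-style selector.

    Scores each visual patch against a task query, keeps the top-k
    patches (controlling information quantity), and reweights them by
    softmaxed relevance (suppressing task-irrelevant noise).

    visual_tokens: (N, D) patch embeddings from a vision encoder
    task_query:    (D,)   pooled language/task embedding
    Returns:       (k, D) reweighted task-relevant patch features
    """
    d = visual_tokens.shape[-1]
    # Relevance of each patch to the task (scaled dot product).
    scores = visual_tokens @ task_query / np.sqrt(d)          # (N,)
    # Indices of the k most task-relevant patches.
    idx = np.argsort(scores)[::-1][: min(k, len(scores))]
    selected = visual_tokens[idx]                             # (k, D)
    # Explicitly modulate influence: soft weights over kept patches.
    w = np.exp(scores[idx] - scores[idx].max())
    w = (w / w.sum())[:, None]                                # (k, 1)
    return selected * w

# Example: 196 ViT-style patches of dimension 32, keep the 16 most relevant.
rng = np.random.default_rng(0)
focused = focus_attention(rng.standard_normal((196, 32)),
                          rng.standard_normal(32), k=16)
```

In this reading, the downstream action decoder attends only to the `k` reweighted patches instead of all `N` tokens, which matches the paper's stated goals of controlling information quantity and damping irrelevant regions.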