🤖 AI Summary
Current vision-language-action (VLA) models face a fundamental trade-off: real-time robotic control imposes strict latency constraints, preventing integration of large language models (LLMs), which limits semantic understanding—especially in tasks involving relative spatial references or disambiguation among visually identical objects. To address this, we propose IA-VLA, an input-augmentation framework that employs a large vision-language model (LVLM) as an offline semantic preprocessing stage. The LVLM generates context-rich, spatially grounded prompts that augment the input fed to the downstream VLA model. We design two augmentation variants and evaluate them systematically on complex instructions featuring visually duplicated objects. Experiments demonstrate substantial improvements, particularly when instructions require extrapolating beyond the concepts seen in demonstrations—supporting a “large-model preprocessing for small-model efficiency” paradigm.
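The two-stage idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `lvlm_describe` and `vla_policy` are hypothetical stand-ins for the slow LVLM preprocessor and the fast VLA controller, with stubbed outputs.

```python
def lvlm_describe(image, instruction):
    """Slow, offline stage: resolve ambiguous references into grounded text.

    A real system would query a large vision-language model here; this stub
    returns a canned spatial resolution for illustration only.
    """
    return f"{instruction} (resolved: target is the duplicate second from the left)"


def vla_policy(image, augmented_instruction):
    """Fast, real-time stage: map (image, text) to a low-level robot action.

    Stub returning a dummy end-effector delta instead of a learned policy.
    """
    return {"instruction": augmented_instruction, "action": [0.0, 0.0, 0.01]}


def ia_vla_step(image, instruction):
    # 1) Augment the raw instruction once, outside the control loop,
    #    so LVLM latency does not constrain the control rate.
    augmented = lvlm_describe(image, instruction)
    # 2) Run the small VLA at control rate on the enriched input.
    return vla_policy(image, augmented)


result = ia_vla_step(image=None, instruction="pick up the second mug from the left")
print(result["instruction"])
```

The key design point is that the expensive semantic reasoning happens once per instruction rather than once per control step, so the VLA's action-output rate is unaffected.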
📝 Abstract
Vision-language-action models (VLAs) have become an increasingly popular approach for addressing robot manipulation problems in recent years. However, such models need to output actions at a rate suitable for robot control, which limits the size of the language model they can be based on and, consequently, their language understanding capabilities. Manipulation tasks may require complex language instructions, such as identifying target objects by their relative positions, to specify human intention. Therefore, we introduce IA-VLA, a framework that utilizes the extensive language understanding of a large vision-language model as a pre-processing stage to generate enriched context that augments the input of a VLA. We evaluate the framework on a set of semantically complex tasks that have been underexplored in the VLA literature, namely tasks involving visual duplicates, i.e., visually indistinguishable objects. A dataset of three types of scenes with duplicate objects is used to compare a baseline VLA against two augmented variants. The experiments show that the VLA benefits from the augmentation scheme, especially when faced with language instructions that require the VLA to extrapolate from concepts it has seen in the demonstrations. For the code, dataset, and videos, see https://sites.google.com/view/ia-vla.