🤖 AI Summary
This work investigates how the choice and capabilities of vision-language models (VLMs) affect their performance as vision-language-action (VLA) policies in embodied-intelligence tasks. To this end, the authors propose VLM4VLA, a lightweight adaptation framework that efficiently converts general-purpose VLMs into VLA policies by introducing only a small number of learnable parameters. Through large-scale evaluation across three benchmarks and probing studies on seven categories of auxiliary embodied tasks, they find a persistent domain gap between VLM pretraining objectives and embodied control: a VLM's generic capabilities are not reliable predictors of downstream policy performance, and improving specific embodied skills does not necessarily translate into better control. The visual module emerges as the critical bottleneck; even when the vision encoder is kept frozen during downstream fine-tuning, injecting control-relevant supervision into it significantly improves policy effectiveness. Remarkably, despite its simplicity, VLM4VLA matches or surpasses more complex architectures across multiple tasks.
📝 Abstract
Vision-Language-Action (VLA) models, which integrate pretrained large Vision-Language Models (VLMs) into their policy backbones, are gaining significant attention for their promising generalization capabilities. This paper revisits a fundamental yet seldom systematically studied question: how do VLM choice and competence translate into downstream VLA policy performance? We introduce VLM4VLA, a minimal adaptation pipeline that converts general-purpose VLMs into VLA policies using only a small set of new learnable parameters, enabling fair and efficient comparison. Despite its simplicity, VLM4VLA proves surprisingly competitive with more sophisticated network designs. Through extensive empirical studies on diverse downstream tasks across three benchmarks, we find that while VLM initialization offers a consistent benefit over training from scratch, a VLM's general capabilities are poor predictors of its downstream task performance. This challenges common assumptions, indicating that standard VLM competence is necessary but not sufficient for effective embodied control. We further investigate the impact of specific embodied capabilities by fine-tuning VLMs on seven auxiliary embodied tasks (e.g., embodied QA, visual pointing, depth estimation). Contrary to intuition, improving a VLM's performance on specific embodied skills does not guarantee better downstream control performance. Finally, modality-level ablations identify the visual module of the VLM, rather than the language component, as the primary performance bottleneck. We demonstrate that injecting control-relevant supervision into the VLM's vision encoder yields consistent gains, even when the encoder remains frozen during downstream fine-tuning. Together, these results isolate a persistent domain gap between current VLM pretraining objectives and the requirements of embodied action planning.