🤖 AI Summary
This work systematically investigates paradigms for integrating foundation models (FMs) into embodied robotics, focusing on complex instruction understanding and dexterous manipulation in dynamic environments. We quantitatively compare three paradigms on instruction grounding and object manipulation tasks: end-to-end vision-language-action (VLA) models, modular pipelines built on vision-language models (VLMs), and modular pipelines built on multimodal large language models (MLLMs), providing the first zero-shot and few-shot generalization evaluation across these settings. Results show that VLAs achieve superior manipulation transfer but suffer from low data efficiency; modular approaches are more robust in instruction grounding, with VLMs attaining 78.3% zero-shot accuracy; and few-shot fine-tuning boosts VLA manipulation success by 41.6%. The study distills design principles for real-world embodied agents and identifies scalability, particularly in bridging perception, reasoning, and action, as a critical challenge.
📝 Abstract
Foundation models (FMs) are increasingly used to bridge language and action in embodied agents, yet the operational characteristics of different FM integration strategies remain under-explored -- particularly for complex instruction following and versatile action generation in changing environments. This paper examines three paradigms for building robotic systems: end-to-end vision-language-action (VLA) models that implicitly integrate perception and planning, and modular pipelines incorporating either vision-language models (VLMs) or multimodal large language models (MLLMs). We evaluate these paradigms through two focused case studies: a complex instruction grounding task assessing fine-grained instruction understanding and cross-modal disambiguation, and an object manipulation task targeting skill transfer via VLA fine-tuning. Our experiments in zero-shot and few-shot settings reveal trade-offs in generalization and data efficiency. By probing performance limits, we distill design implications for language-driven physical agents and outline emerging challenges and opportunities for FM-powered robotics in real-world conditions.