🤖 AI Summary
Pretrained vision-language-action (VLA) models lack in-context learning (ICL) capability, preventing adaptation to novel tasks from few-shot demonstrations without parameter updates. This work introduces retraining for in-context learning (RICL), the first method to retroactively endow imitation-learning-pretrained VLA models with ICL. RICL dynamically retrieves the most relevant demonstration segments from a small set of user-provided demonstrations and injects them into the model’s context window, enabling rapid task teaching without gradient updates. With only 10–20 demonstrations per task, RICL significantly improves performance on unseen manipulation tasks; when finetuning on the target-task demonstrations is possible, it further boosts performance. The approach bridges the gap between scalable pretraining and flexible, sample-efficient adaptation, requiring no architectural modification or gradient-based updates at inference time. To foster reproducibility and community advancement, the authors release code and model weights for RICL-$π_{0}$-FAST, contributing to the development of foundation models for general-purpose robotics.
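The summary describes retrieving relevant demonstration segments and injecting them into the model's context. The paper's actual retrieval mechanism and encoder are not specified here, so the following is only a minimal illustrative sketch: it assumes a generic embedding function, uses cosine similarity over precomputed segment embeddings, and assembles retrieved (observation, action) exemplars into a text-style context. Names like `retrieve_segments` and `build_context` are hypothetical, not the paper's API.

```python
import numpy as np

def retrieve_segments(query_embedding, segment_embeddings, segments, k=3):
    """Return the k demonstration segments most similar to the query.

    Similarity here is cosine similarity between the query embedding and
    each stored segment embedding -- a stand-in for whatever encoder and
    metric RICL actually uses.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    s = segment_embeddings / np.linalg.norm(
        segment_embeddings, axis=1, keepdims=True
    )
    scores = s @ q                      # cosine similarity per segment
    top = np.argsort(scores)[::-1][:k]  # indices of the k best matches
    return [segments[i] for i in top]

def build_context(instruction, retrieved, query_obs):
    """Assemble an in-context prompt: retrieved (observation, action)
    exemplars followed by the current observation, for which the VLA
    is asked to predict the next action."""
    parts = [f"Task: {instruction}"]
    for obs, act in retrieved:
        parts.append(f"obs: {obs} -> action: {act}")
    parts.append(f"obs: {query_obs} -> action:")
    return "\n".join(parts)
```

In this sketch, retrieval happens at every query rather than once per episode; the real system may segment and cache demonstrations differently, but the core idea (nearest-neighbor lookup feeding the context window, with no parameter updates) is the same.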
📝 Abstract
Multi-task "vision-language-action" (VLA) models have recently demonstrated increasing promise as generalist foundation models for robotics, achieving non-trivial performance out of the box on new tasks in new environments. However, for such models to be truly useful, an end user must have easy means to teach them to improve. For language and vision models, the emergent ability to perform in-context learning (ICL) has proven to be a versatile and highly useful interface for easily teaching new tasks with no parameter finetuning. Unfortunately, VLAs pre-trained with imitation learning objectives do not naturally acquire ICL abilities. In this paper, we demonstrate that, with the right finetuning recipe and a small robot demonstration dataset, it is possible to inject in-context adaptability post hoc into such a VLA. After retraining for in-context learning (RICL), our system permits an end user to provide a small number (10-20) of demonstrations for a new task. RICL then fetches the most relevant portions of those demonstrations into the VLA context to exploit ICL, performing the new task with boosted performance. We apply RICL to inject ICL into the $π_{0}$-FAST VLA, and show that it permits large in-context improvements on a variety of new manipulation tasks with only 20 demonstrations per task, without any parameter updates. When parameter updates on the target task demonstrations are possible, RICL finetuning further boosts performance. We release code and model weights for RICL-$π_{0}$-FAST alongside the paper to enable, for the first time, a simple in-context learning interface for new manipulation tasks. Website: https://ricl-vla.github.io.