Grounding Robot Generalization in Training Data via Retrieval-Augmented VLMs

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to assess the generalization capability of robotic policies in novel scenarios and lack a clear characterization of the relationship between test tasks and training data. This work proposes RADAR, a framework that integrates retrieval augmentation with vision-language models (VLMs) to enable interpretable, data-driven generalization analysis via a two-stage pipeline: it first retrieves relevant training samples using policy embeddings, then uses a VLM to compare the test task against the retrieved data and produce a multidimensional classification of generalization types. Controlled experiments validate the analytical capacity of VLMs, and further results show that the retrieval module accurately identifies critical training instances and that RADAR aligns well with human annotations on large-scale manipulation datasets.

📝 Abstract
Recent work on robot manipulation has advanced policy generalization to novel scenarios. However, it is often difficult to characterize how different evaluation settings actually represent generalization from the training distribution of a given policy. To work towards more precise evaluation of generalization in robotics, we propose RADAR, a scalable framework for directly comparing test-time evaluation tasks to policy training data, to determine what form of policy generalization is required. RADAR consists of a two-stage pipeline: first, retrieval using generalist policy embeddings identifies which training examples are relevant for a given evaluation task. Next, vision-language models (VLMs) analyze the evaluation task against the retrieved data, outputting interpretable analysis on how they compare along a variety of axes, and an overall classification of what type of policy generalization is required. Through controlled experiments, we demonstrate that VLMs are effective at analyzing data for generalization, and that our retrieval step effectively identifies examples needed to make accurate classifications with respect to the training data. Furthermore, we scale RADAR to large-scale datasets, where we observe agreement with human-defined benchmark conditions from prior work. We provide demonstrations at radar-analysis.github.io.
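The two-stage pipeline described in the abstract can be sketched in a few lines. This is a minimal illustrative mock-up, not the paper's implementation: the embedding dimensions, function names, comparison axes, and prompt wording are all hypothetical, and a real system would use actual generalist-policy embeddings and a VLM API in place of the toy data and string template below.

```python
import numpy as np

# Hypothetical sketch of RADAR's two-stage pipeline; all names are
# illustrative and do not come from the paper's codebase.

def retrieve_training_examples(test_embedding, train_embeddings, k=3):
    """Stage 1: rank training examples by cosine similarity of policy
    embeddings and return the indices of the top-k matches."""
    test = test_embedding / np.linalg.norm(test_embedding)
    train = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    scores = train @ test          # cosine similarity per training example
    return np.argsort(scores)[::-1][:k]

def build_vlm_prompt(test_task, retrieved_tasks):
    """Stage 2: assemble a prompt asking a VLM to compare the test task
    against the retrieved training data (the actual prompt and
    comparison axes used by RADAR are not specified here)."""
    lines = [f"Test task: {test_task}", "Retrieved training examples:"]
    lines += [f"- {t}" for t in retrieved_tasks]
    lines.append("Compare the test task to these examples along several "
                 "axes, then classify the type of generalization required.")
    return "\n".join(lines)

# Toy usage: four training examples with random stand-in embeddings.
rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(4, 8))
train_tasks = ["pick red block", "open drawer", "pick blue block", "wipe table"]
# Simulate a test task whose embedding is near training example 2.
test_embedding = train_embeddings[2] + 0.01 * rng.normal(size=8)

top = retrieve_training_examples(test_embedding, train_embeddings, k=2)
prompt = build_vlm_prompt("pick green block", [train_tasks[i] for i in top])
print(prompt)
```

In this sketch the nearest neighbor is the training example the test embedding was perturbed from, so the prompt surfaces the most relevant demonstrations before the VLM is asked to classify the generalization type.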
Problem

Research questions and friction points this paper is trying to address.

robot generalization
training data
evaluation tasks
policy generalization
retrieval-augmented VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented VLMs
Policy Generalization
Robot Manipulation
Training Data Grounding
Interpretable Evaluation