🤖 AI Summary
This work addresses the key challenge of enabling embodied foundation models to generalize to unseen user behaviors and task configurations in open-world settings, a critical requirement for open-ended assistive intelligence. It presents the first systematic exploration of open-set corrective assistance, moving beyond prior approaches that rely on closed corrective categories or external planners. By constructing an interactive synthetic dataset in the Overcooked environment that spans multimodal alignment, error reasoning, and scene diversity, the authors fine-tune a LLaMA-based multimodal model to identify flaws in user behavior and to generate corrective feedback in both language and action. Experiments show that data covering multiple dimensions of assistive interaction significantly improves the model's generalization to novel behaviors and configurations, yielding principled guidelines for data design and a viable technical path toward open-set embodied assistive intelligence.
📝 Abstract
Embodied foundation models are increasingly performant in real-world domains such as robotics and autonomous driving. These models are often deployed in interactive or assistive settings, where it is important that they generalize to new users and new tasks. Diverse interactive data generation offers a promising avenue toward data-efficient generalization for interactive embodied foundation models. In this paper, we investigate the generalization capabilities of a multimodal foundation model fine-tuned on diverse interactive assistance data in a synthetic domain. We explore generalization along two axes: a) assisting with unseen categories of user behavior and b) providing guidance in new configurations not encountered during training. We study a broad capability we call **Open-Set Corrective Assistance**, in which the model must inspect lengthy user behavior and provide assistance through either corrective actions or language-based feedback. This task remains unsolved in prior work, which typically assumes closed corrective categories or relies on external planners, making it a challenging testbed for evaluating the limits of assistive data. To support this task, we generate synthetic assistive datasets in Overcooked and fine-tune a LLaMA-based model to evaluate generalization to novel tasks and user behaviors. Our approach yields key insights into the nature of the assistive datasets required for open-set assistive intelligence. In particular, we show that performant models benefit from datasets covering different aspects of assistance, including multimodal grounding, defect inference, and exposure to diverse scenarios.
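To make the three data dimensions named above concrete, here is a minimal sketch of what a single assistive training record could look like: visual observations for multimodal grounding, a user trajectory with an inferred defect label, and corrective feedback in both language and action. The schema, field names, and action strings are illustrative assumptions, not the paper's actual data format.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical schema for one assistive training example.
# Field names and action strings are illustrative assumptions,
# not the dataset's actual format.
@dataclass
class AssistiveExample:
    frames: List[str]              # rendered Overcooked observations (multimodal grounding)
    user_trajectory: List[str]     # the user's observed action sequence
    defect: Optional[str]          # inferred behavioral flaw; None if behavior is fine
    feedback_text: str             # corrective feedback in language
    corrective_actions: List[str]  # corrective feedback in action; may be empty

ex = AssistiveExample(
    frames=["obs_000.png", "obs_001.png"],
    user_trajectory=["move_up", "pickup_onion", "drop_onion"],
    defect="dropped ingredient before reaching the pot",
    feedback_text="Carry the onion to the pot before releasing it.",
    corrective_actions=["pickup_onion", "move_right", "place_in_pot"],
)
print(ex.defect is not None)  # prints True: this trajectory carries a defect label
```

Varying the layouts behind `frames` and the flaw types behind `defect` is one way to realize the scenario diversity and defect inference the abstract describes.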