In-Context Learning Enables Robot Action Prediction in LLMs

πŸ“… 2024-10-16
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
Bridging the modality gap between vision-action data in embodied tasks and purely text-based large language models (LLMs) remains challenging for training-free robotic action prediction. Method: RoboPrompt is a framework that enables direct LLM-driven action generation without fine-tuning or visual encoders. It uses keyframe heuristics to select important moments from an episode, losslessly encodes end-effector actions and estimated initial object poses as text, and assembles these descriptions into structured in-context learning (ICL) templates that map embodied information into LLM-compatible prompts. Contribution/Results: Evaluated on both simulated and real-robot tasks, RoboPrompt achieves up to 37% higher action prediction accuracy than zero-shot and conventional ICL baselines. It demonstrates, for the first time, the feasibility of using off-the-shelf, text-only LLMs, such as Llama-3 and GPT-3.5, for embodied action reasoning without any training, parameter updates, or auxiliary vision modules.

πŸ“ Abstract
Recently, Large Language Models (LLMs) have achieved remarkable success using in-context learning (ICL) in the language domain. However, leveraging the ICL capabilities within LLMs to directly predict robot actions remains largely unexplored. In this paper, we introduce RoboPrompt, a framework that enables off-the-shelf text-only LLMs to directly predict robot actions through ICL without training. Our approach first heuristically identifies keyframes that capture important moments from an episode. Next, we extract end-effector actions from these keyframes as well as the estimated initial object poses, and both are converted into textual descriptions. Finally, we construct a structured template to form ICL demonstrations from these textual descriptions and a task instruction. This enables an LLM to directly predict robot actions at test time. Through extensive experiments and analysis, RoboPrompt shows stronger performance over zero-shot and ICL baselines in simulated and real-world settings.
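The pipeline in the abstract (extract keyframe actions and initial object poses, convert them to text, assemble ICL demonstrations plus a task instruction) can be sketched as below. The field names, rounding, and template wording are illustrative assumptions, not the paper's exact encoding:

```python
# Sketch of a RoboPrompt-style ICL prompt builder. The pose layout
# (x, y, z, quaternion) and the line formats are assumptions made for
# illustration; the paper's actual textual encoding may differ.

def encode_pose(pose):
    """Render a pose or action vector as a plain-text bracketed list."""
    return "[" + ", ".join(f"{v:.2f}" for v in pose) + "]"

def encode_demo(instruction, object_poses, keyframe_actions):
    """One ICL demonstration: instruction, initial poses, keyframe actions."""
    lines = [f"Task: {instruction}"]
    for name, pose in object_poses.items():
        lines.append(f"Object {name}: {encode_pose(pose)}")
    for i, action in enumerate(keyframe_actions):
        lines.append(f"Action {i}: {encode_pose(action)}")
    return "\n".join(lines)

def build_prompt(demos, test_instruction, test_object_poses):
    """Concatenate demonstrations, then pose the test query to the LLM."""
    parts = [encode_demo(*d) for d in demos]
    query = [f"Task: {test_instruction}"]
    for name, pose in test_object_poses.items():
        query.append(f"Object {name}: {encode_pose(pose)}")
    query.append("Action 0:")  # the LLM continues the sequence from here
    parts.append("\n".join(query))
    return "\n\n".join(parts)
```

At test time the resulting string would be sent to an off-the-shelf text-only LLM, whose completion is interpreted as the predicted action sequence.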
Problem

Research questions and friction points this paper is trying to address.

Predict robot actions using LLMs without training
Leverage in-context learning for robot action prediction
Convert keyframes and poses into textual descriptions for LLMs
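The keyframe identification mentioned above is often done with simple motion heuristics. A minimal sketch, assuming a frame counts as a keyframe when the gripper state flips or the end-effector comes to rest (a common heuristic in the manipulation literature, not necessarily the paper's exact rule):

```python
# Hypothetical keyframe selector: mark a frame when the gripper
# open/close state changes, or when the end-effector speed drops
# to (near) zero. The threshold is an illustrative assumption.

def find_keyframes(gripper_states, speeds, eps=1e-3):
    """Return indices of frames that likely mark important moments."""
    keyframes = []
    for t in range(1, len(gripper_states)):
        gripper_changed = gripper_states[t] != gripper_states[t - 1]
        just_stopped = abs(speeds[t]) < eps and abs(speeds[t - 1]) >= eps
        if gripper_changed or just_stopped:
            keyframes.append(t)
    return keyframes
```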
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-context learning for robot action prediction
Converts keyframes and poses into textual descriptions
Enables LLMs to predict actions without training
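For the last point, the LLM's completion must be mapped back from text to numeric actions. A small parsing sketch, assuming the model echoes the bracketed number format used in the prompt (a hypothetical convention, not quoted from the paper):

```python
import re

# Hypothetical parser for the LLM's textual action prediction, assuming
# actions appear as bracketed, comma-separated floats, e.g.
# "Action 0: [0.10, 0.20, 0.50, 0.00, 0.00, 0.00, 1.00]".

def parse_action(text):
    """Extract the first bracketed list of floats from the LLM output."""
    match = re.search(r"\[([^\]]+)\]", text)
    if match is None:
        raise ValueError(f"no action found in: {text!r}")
    return [float(v) for v in match.group(1).split(",")]
```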