🤖 AI Summary
Existing in-context imitation learning methods condition solely on state-action trajectories, making it difficult to capture task intent and limiting generalization in complex or ambiguous scenarios. This work proposes the first explicit integration of embodied visual reasoning into that framework, incorporating structured visual reasoning traces, specifically anticipated future paths in image space, into demonstration prompts. Within a unified autoregressive transformer, the model jointly generates these reasoning traces and low-level actions, so it learns not only to imitate behaviors but also to reproduce the underlying decision logic. The approach improves success rates in both simulated and real-world manipulation tasks and generalizes better to unseen tasks and novel object configurations.
📝 Abstract
In-context imitation learning enables robots to adapt to new tasks from a small number of demonstrations without additional training. However, existing approaches typically condition only on state-action trajectories and lack explicit representations of task intent. This limitation hinders performance in complex and ambiguous settings where the same actions may be consistent with different objectives. To address this, we present In-Context Imitation Learning with Visual Reasoning (ICLR), a novel framework that augments demonstration prompts with structured visual reasoning traces representing anticipated future robot trajectories in image space. ICLR jointly learns to generate reasoning traces and low-level actions within a unified autoregressive transformer, enabling the model to imitate not only the actions themselves but also the reasoning process that leads to them. We extensively evaluate ICLR on both simulated and real-world manipulation tasks and demonstrate consistent improvements in success rates and in generalization to unseen tasks and novel object configurations over other in-context imitation learning methods. These results suggest that incorporating embodied visual reasoning is a promising direction for improving the robustness and generalization of robotic in-context learning systems.
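To make the prompt-and-decoding idea concrete, the following is a minimal sketch of how demonstration triples (observation, reasoning trace, actions) might be interleaved into a prompt, with the model then autoregressively emitting trace tokens before action tokens. All names here (`make_prompt`, `rollout`, the `<TRACE>`/`<ACT>`/`<EOD>` tags) are hypothetical illustrations; the paper's actual tokenization and architecture are not specified in the abstract.

```python
# Hypothetical sketch of an ICLR-style prompt layout and joint decoding loop.
# Tokens are plain strings here; a real system would use learned embeddings.

def make_prompt(demos, query_obs):
    """Interleave demonstration (observation, trace, actions) triples as token
    lists, then append the query observation and open a trace segment."""
    tokens = []
    for obs, trace, actions in demos:
        tokens += obs + ["<TRACE>"] + trace + ["<ACT>"] + actions + ["<EOD>"]
    # The model is prompted to generate the reasoning trace first.
    tokens += query_obs + ["<TRACE>"]
    return tokens

def rollout(model_step, prompt, max_len=32):
    """Autoregressive decode: first the visual reasoning trace (anticipated
    path in image space), then low-level actions after the <ACT> tag.
    model_step maps the token sequence so far to the next token."""
    seq = list(prompt)
    trace, actions, mode = [], [], "trace"
    for _ in range(max_len):
        tok = model_step(seq)
        seq.append(tok)
        if tok == "<ACT>":
            mode = "action"      # switch from trace tokens to action tokens
        elif tok == "<EOD>":
            break                # end of this episode's generation
        elif mode == "trace":
            trace.append(tok)
        else:
            actions.append(tok)
    return trace, actions
```

The key design point the abstract emphasizes is that a single decoding pass produces both outputs, so supervision on the trace tokens shapes the same representation that produces the actions.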