🤖 AI Summary
This work addresses the challenge that large language models struggle to generalize to unseen tasks because they lack direct perception of the physical environment. To bridge this gap, the authors propose a method that integrates visual and linguistic knowledge by fine-tuning a vision-language model to generate spatiotemporally consistent, language-conditioned imagined trajectories. Counterfactual prompting further diversifies these trajectories, aligning natural language instructions with visual cues from the environment. When the resulting data are used for offline reinforcement learning on robotic manipulation, the method significantly improves generalization, achieving a success rate on unseen tasks more than 24% higher than that of the baseline.
📝 Abstract
Combining Large Language Models (LLMs) with Reinforcement Learning (RL) enables agents to interpret language instructions more effectively for task execution. However, LLMs typically lack direct perception of the physical environment, which limits their understanding of environmental dynamics and their ability to generalize to unseen tasks. To address this limitation, we propose Visual-Language Knowledge-Guided Offline Reinforcement Learning (VLGOR), a framework that integrates visual and language knowledge to generate imaginary rollouts, thereby enriching the interaction data. The core idea of VLGOR is to fine-tune a vision-language model to predict future states and actions conditioned on an initial visual observation and a high-level instruction, ensuring that the generated rollouts remain temporally coherent and spatially plausible. Furthermore, we employ counterfactual prompts to produce more diverse rollouts for offline RL training, enabling the agent to learn to follow language instructions while staying grounded in the environment through visual cues. Experiments on robotic manipulation benchmarks demonstrate that VLGOR significantly improves performance on unseen tasks that require novel optimal policies, achieving a success rate over 24% higher than that of the baseline methods.
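The abstract describes a three-step pipeline: a fine-tuned VLM imagines rollouts from an initial observation and an instruction, counterfactual prompts diversify those rollouts, and the imagined transitions augment the dataset used for offline RL. The sketch below shows one plausible shape of that loop. Every name in it (`Rollout`, `vlm_rollout_fn`, `generate_imaginary_rollouts`, `augment_offline_dataset`) is a hypothetical stand-in rather than the paper's API, and the per-step reward field is an assumption, since the abstract does not say how imagined transitions are reward-labeled.

```python
# Minimal sketch of the VLGOR data-generation loop described in the abstract.
# All names are hypothetical; the paper does not publish an implementation.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class Rollout:
    """One imaginary trajectory: per-step observations, actions, and
    (assumed) rewards produced by the fine-tuned VLM."""
    observations: List[object]
    actions: List[object]
    rewards: List[float]

def generate_imaginary_rollouts(
    vlm_rollout_fn: Callable[[object, str], Rollout],  # fine-tuned VLM inference (assumed)
    initial_obs: object,
    instruction: str,
    counterfactual_prompts: Sequence[str],
) -> List[Rollout]:
    """Query the VLM once with the original instruction, then once per
    counterfactual prompt, to diversify the imagined data."""
    prompts = [instruction, *counterfactual_prompts]
    return [vlm_rollout_fn(initial_obs, p) for p in prompts]

def augment_offline_dataset(
    dataset: List[Tuple[object, object, float, object]],  # (s, a, r, s') tuples
    rollouts: Sequence[Rollout],
) -> None:
    """Append imagined transitions to the real offline dataset so an
    offline RL learner can train on the union."""
    for ro in rollouts:
        for t in range(len(ro.actions)):
            next_obs = (
                ro.observations[t + 1]
                if t + 1 < len(ro.observations)
                else None  # terminal step has no successor observation
            )
            dataset.append((ro.observations[t], ro.actions[t], ro.rewards[t], next_obs))
```

A standard offline RL algorithm (e.g., CQL or IQL) would then train on the combined real and imagined transitions; the diversity injected by the counterfactual prompts is what broadens state-action coverage beyond the original offline data.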