Vision-Language-Policy Model for Dynamic Robot Task Planning

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To bridge the semantic gap between natural language instructions and autonomous robotic execution in unstructured environments, this paper proposes the first end-to-end vision-language-policy joint model that unifies multimodal perception, semantic understanding, and dynamic behavior planning. Methodologically, a vision-language model (VLM), fine-tuned on real-world data, is tightly coupled with modules for scene understanding, instruction parsing, and policy generation; it is trained via a synergistic combination of reinforcement learning and imitation learning. The resulting framework enables real-time replanning upon in-execution instruction changes and exhibits cross-robot embodiment generalization. Experiments demonstrate a policy update latency under 0.8 seconds, cross-platform transfer success rate exceeding 89%, and significantly improved adaptability to novel tasks and responsiveness to dynamic environmental changes.
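The summary above describes a perceive-parse-plan loop that re-plans whenever the instruction changes mid-execution. The following is a minimal, hypothetical sketch of that control flow; the class, method, and field names are assumptions for illustration, not the paper's actual API.

```python
# Hypothetical sketch of the VLP control loop described in the summary.
# All names (VLPModel, parse_scene, generate_policy, ...) are assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    image: object          # camera frame(s) from the robot
    instruction: str       # current natural-language command

class VLPModel:
    """Stands in for the fine-tuned VLM coupled with scene understanding,
    instruction parsing, and policy generation modules."""
    def parse_scene(self, image):
        return {"image": image}            # multimodal perception (stub)
    def parse_instruction(self, text):
        return {"goal": text}              # semantic understanding (stub)
    def generate_policy(self, scene, goal):
        # Returns a behavior policy mapping observations to actions.
        return lambda obs: f"act-toward:{goal['goal']}"

def control_loop(model, observations, act):
    """Run the perceive-plan-act cycle, re-planning whenever the
    instruction changes in-execution (the paper reports < 0.8 s
    policy update latency for this step)."""
    policy, last_instruction, replans = None, None, 0
    for obs in observations:
        if policy is None or obs.instruction != last_instruction:
            scene = model.parse_scene(obs.image)
            goal = model.parse_instruction(obs.instruction)
            policy = model.generate_policy(scene, goal)
            last_instruction = obs.instruction
            replans += 1
        act(policy(obs))
    return replans
```

For example, feeding a stream of observations whose instruction switches from "pick cup" to "place cup" triggers exactly two planning passes: one initial and one re-plan.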

📝 Abstract
Bridging the gap between natural language commands and autonomous execution in unstructured environments remains an open challenge for robotics. This requires robots to perceive and reason over the current task scene through multiple modalities, and to plan their behaviors to achieve their intended goals. Traditional robotic task-planning approaches often struggle to bridge low-level execution with high-level task reasoning, and cannot dynamically update task strategies when instructions change during execution, which ultimately limits their versatility and adaptability to new tasks. In this work, we propose a novel language model-based framework for dynamic robot task planning. Our Vision-Language-Policy (VLP) model, based on a vision-language model fine-tuned on real-world data, can interpret semantic instructions and integrate reasoning over the current task scene to generate behavior policies that control the robot to accomplish the task. Moreover, it can dynamically adjust the task strategy in response to changes in the task, enabling flexible adaptation to evolving task requirements. Experiments conducted with different robots and a variety of real-world tasks show that the trained model can efficiently adapt to novel scenarios and dynamically update its policy, demonstrating strong planning autonomy and cross-embodiment generalization. Videos: https://robovlp.github.io/
Problem

Research questions and friction points this paper is trying to address.

Bridge natural language commands with autonomous robot execution
Enable dynamic task strategy updates during robot operation
Enhance robot adaptability to novel and unstructured environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language model fine-tuned for real-world robot planning
Dynamic policy adjustment to changing task instructions during execution
Cross-embodiment generalization across different robots and tasks
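The summary states the model is trained via a synergistic combination of reinforcement learning and imitation learning. A common way to combine the two is a weighted sum of a behavior-cloning term and a policy-gradient term; the sketch below shows that generic form only, since the paper's exact objective and weighting are not given here (`beta` and all function names are assumptions).

```python
# Generic combined imitation + reinforcement objective (illustrative only;
# the paper's actual loss formulation is not specified in this summary).
import math

def behavior_cloning_loss(log_prob_expert_action):
    # Imitation term: negative log-likelihood of the demonstrated action.
    return -log_prob_expert_action

def policy_gradient_loss(log_prob_taken_action, advantage):
    # REINFORCE-style surrogate: -A(s, a) * log pi(a | s).
    return -advantage * log_prob_taken_action

def combined_loss(log_prob_expert, log_prob_taken, advantage, beta=0.5):
    # beta balances the imitation signal against the reinforcement signal.
    return (beta * behavior_cloning_loss(log_prob_expert)
            + (1 - beta) * policy_gradient_loss(log_prob_taken, advantage))
```

Setting `beta=1.0` recovers pure imitation learning, while `beta=0.0` recovers pure policy-gradient reinforcement learning.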