🤖 AI Summary
This work addresses task planning for embodied agents in complex environments that demand multi-turn interaction, long-horizon reasoning, and rich scene understanding. The authors propose RoboAgent, a framework in which a scheduler and a set of sub-capabilities, all implemented by a single vision-language model, carry out planning as a capability-driven pipeline. The scheduler decomposes a complex task into basic vision-language subproblems, each handled by a capability that maintains its own context, yielding a transparent and controllable reasoning process without reliance on external tools. The model is trained with a three-stage paradigm combining behavior cloning, DAgger, and reinforcement learning, using the simulator's internal state and synthetic data to construct supervision for each capability. Experimental results show that RoboAgent substantially outperforms existing methods on established embodied task planning benchmarks, validating its effectiveness and generalization capability.
📝 Abstract
This paper focuses on embodied task planning, where an agent acquires visual observations from the environment and executes atomic actions to accomplish a given task. Although recent Vision-Language Models (VLMs) have achieved impressive results in multimodal understanding and reasoning, their performance remains limited in embodied planning, which involves multi-turn interaction, long-horizon reasoning, and extended context analysis. To bridge this gap, we propose RoboAgent, a capability-driven planning pipeline in which the model actively invokes different sub-capabilities. Each capability maintains its own context and, according to the query issued by a scheduler, produces intermediate reasoning results or interacts with the environment. This framework decomposes complex planning into a sequence of basic vision-language problems that VLMs can better address, enabling a more transparent and controllable reasoning process. The scheduler and all capabilities are implemented with a single VLM, without relying on external tools. To train this VLM, we adopt a multi-stage paradigm that consists of: (1) behavior cloning with expert plans, (2) DAgger training using trajectories collected by the model, and (3) reinforcement learning guided by an expert policy. Across these stages, we exploit the internal state of the environment simulator to construct high-quality supervision for each capability, and we further introduce augmented and synthetic data to enhance the model's performance in more diverse scenarios. Extensive experiments on widely used embodied task planning benchmarks validate the effectiveness of the proposed approach. Our code will be made available at https://github.com/woyut/RoboAgent_CVPR26.
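To make the capability-driven pipeline concrete, here is a minimal sketch of how a scheduler might invoke sub-capabilities that each keep their own context, with every role served by one shared VLM. All names (`Capability`, `plan_and_act`, `vlm_generate`, the `env` interface) are illustrative assumptions, not the actual RoboAgent API.

```python
from dataclasses import dataclass, field


def vlm_generate(role: str, context: list) -> dict:
    """Placeholder for the single shared VLM that serves every role
    (the scheduler and all capabilities), as described in the abstract."""
    raise NotImplementedError("backed by one vision-language model in the paper")


@dataclass
class Capability:
    """A sub-capability that maintains its own context across invocations."""
    name: str
    context: list = field(default_factory=list)

    def run(self, query: str, observation) -> dict:
        # Answer a basic vision-language query from the scheduler, grounded in
        # the current observation, and record it in this capability's context.
        self.context.append({"query": query, "observation": observation})
        result = vlm_generate(role=self.name, context=self.context)
        self.context.append({"result": result})
        return result


def plan_and_act(task: str, env, capabilities: dict, max_steps: int = 50):
    """Scheduler loop: decompose the task into capability queries and atomic actions."""
    observation = env.reset(task)
    scheduler_context = [{"task": task, "observation": observation}]
    for _ in range(max_steps):
        # The scheduler (the same VLM) either invokes a sub-capability with a
        # query or emits an atomic action for the environment.
        decision = vlm_generate(role="scheduler", context=scheduler_context)
        if decision["type"] == "invoke":
            result = capabilities[decision["capability"]].run(decision["query"], observation)
            scheduler_context.append({"capability": decision["capability"], "result": result})
        else:  # decision["type"] == "act"
            observation, done = env.step(decision["action"])
            scheduler_context.append({"action": decision["action"], "observation": observation})
            if done:
                break
```

The separation of per-capability contexts is the point of the sketch: the scheduler only sees compact intermediate results, which is one plausible way to keep long-horizon reasoning within the context budget of a single VLM.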