🤖 AI Summary
This work addresses the poor generalization in robot task execution caused by rigid target-state specifications. We propose a target-adaptive framework grounded in environmental variation modeling: from a single human demonstration, it learns causal patterns of environmental state changes, constructs a transferable environmental variation model, and uses this model to autonomously retarget to any semantically compatible goal state under task-level constraints. Integrated with goal-conditioned motion planning, the framework generates robust, executable policies. Our key contribution is the first incorporation of environmental variation modeling into target-adaptive planning, enabling “one demonstration, many goals” generalization. Extensive evaluation on real robotic platforms demonstrates substantial improvements in task success rate and cross-goal robustness.
📝 Abstract
This paper presents a framework for defining a task with freedom and variability in its goal state. A robot can observe an execution of the task and then target a goal different from the demonstrated one: a goal that is still compatible with the task description but easier for the robot to execute. We define models of an environment state and an environment variation, and present experiments on how to interactively create the variation from a single task demonstration and how to use it to plan an execution that brings any environment into a goal state.
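To make the idea concrete, here is a minimal toy sketch of the workflow the abstract describes: learn which objects change state in one demonstration, record a set of task-compatible goal values for each, and then retarget a new environment to the cheapest compatible goal. All names (`VariationModel`, the symbolic state values, the cost table) are hypothetical illustrations, not the paper's actual representation or planner.

```python
from dataclasses import dataclass, field

# Hypothetical toy representation: an environment state maps object
# names to symbolic property values, e.g. {"cup": "on_table"}.
State = dict


@dataclass
class VariationModel:
    """Per object, the set of goal values considered task-compatible,
    derived from a single demonstrated state change."""
    compatible_goals: dict = field(default_factory=dict)

    @classmethod
    def from_demonstration(cls, before: State, after: State, synonyms: dict):
        """Record, for every object that changed, the demonstrated goal
        value plus any values declared semantically equivalent to it."""
        model = cls()
        for obj, value in after.items():
            if before.get(obj) != value:
                model.compatible_goals[obj] = {value} | synonyms.get(value, set())
        return model

    def plan(self, current: State, cost: dict) -> dict:
        """Pick, per varied object, the cheapest compatible goal value
        and return the required state changes (a trivial 'plan')."""
        changes = {}
        for obj, goals in self.compatible_goals.items():
            if current.get(obj) in goals:
                continue  # already in a task-compatible goal state
            changes[obj] = min(
                goals, key=lambda g: cost.get((current.get(obj), g), 1)
            )
        return changes
```

For example, if a demonstration moves a cup from the floor to the table, and "on_shelf" is declared equivalent to "on_table", a robot holding the cup near the shelf can plan `{"cup": "on_shelf"}` instead of reproducing the demonstrated goal exactly. A real system would of course ground the states in perception and hand the chosen goal to a motion planner; this sketch only illustrates the retargeting logic.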