🤖 AI Summary
This work addresses the challenge of zero-shot object rearrangement for intelligent general-purpose robots operating in novel environments under natural language instructions (e.g., "place X at Y"). We propose a semantic-pose disentanglement framework that decomposes the task into three stages: object localization, target pose imagination, and robot control. This decouples semantic understanding from physical pose generation, with only type-level object information passed between the stages. Our approach introduces the first integration of large vision models with a diffusion-based 3D pose estimator, trained exclusively on minimal synthetic data and GPT-4-generated automatic annotations, without scene-specific fine-tuning. Evaluated in both simulation and real-world settings, our method demonstrates strong zero-shot generalization to unseen objects, layouts, and instructions, while ensuring physically plausible and kinematically feasible placements.
📝 Abstract
General-purpose object placement is a fundamental capability of an intelligent generalist robot, i.e., being able to rearrange objects following human instructions even in novel environments. To achieve this, we break rearrangement down into three parts: object localization, goal imagination, and robot control, and propose a framework named SPORT. SPORT leverages pre-trained large vision models for broad semantic reasoning about objects, and learns a diffusion-based 3D pose estimator to ensure physically realistic results. Only object types (of the object to be moved and the reference object) are communicated between these two parts, which brings two benefits. First, we can fully exploit the powerful open-set object localization and recognition abilities of large vision models, since no fine-tuning for robotic scenarios is needed. Second, the diffusion-based estimator only needs to "imagine" the poses of the moving and reference objects after placement, without requiring their semantic information. This greatly reduces the training burden, so no massive training is required. The training data for goal pose estimation is collected in simulation and annotated with GPT-4. Simulation and real-world experiments demonstrate the potential of our approach to accomplish general-purpose object rearrangement, placing various objects following precise instructions.
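The three-stage decomposition and its narrow type-only interface can be illustrated with a minimal sketch. This is not SPORT's actual API: all class and function names below are hypothetical, and both stages are stubbed out, purely to show how semantic localization and pose imagination can be decoupled so that only object type labels cross the boundary between them.

```python
from dataclasses import dataclass, field

@dataclass
class LocalizedObject:
    obj_type: str                 # only this label crosses the module boundary
    internal: dict = field(default_factory=dict)  # masks/features stay local

def localize(instruction: str):
    """Stage 1 (stub): open-set localization with a large vision model.

    Returns the moving object and the reference object. In this toy version
    we parse a 'place X on Y'-style instruction instead of running a model.
    """
    words = instruction.strip().rstrip(".").split()
    moving = words[words.index("place") + 1]
    reference = words[-1]
    return LocalizedObject(moving), LocalizedObject(reference)

def imagine_goal_pose(moving: LocalizedObject, reference: LocalizedObject):
    """Stage 2 (stub): a diffusion-based estimator would 'imagine' the
    post-placement 3D poses from object types alone; no semantic features
    are consumed here. We return placeholder (x, y, z) poses."""
    return {"moving": (0.0, 0.0, 0.1), "reference": (0.0, 0.0, 0.0)}

def rearrange(instruction: str):
    """Full pipeline: localization -> goal imagination -> robot control
    (control is out of scope for this sketch)."""
    moving, reference = localize(instruction)
    poses = imagine_goal_pose(moving, reference)
    return moving.obj_type, reference.obj_type, poses

moving_type, reference_type, goal_poses = rearrange("place mug on tray")
```

The design point the sketch makes concrete: because `imagine_goal_pose` sees only type strings, the localization stage can be swapped for any open-set vision model without retraining the pose estimator, and vice versa.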