🤖 AI Summary
Existing object-centric representations struggle to distinguish target objects, obstacles, and the robot's own body in multi-object scenes, leading to poor generalization of vision-driven manipulation policies. To address this, we propose the Disentangled Object-Centric Image Representation (DOCIR) framework, the first method to explicitly disentangle the visual representations of target objects, obstacles, and the robot's body. DOCIR integrates self-supervised object discovery, a spatial-semantic disentangled encoder, and a contrastive attention mechanism, and is trained end-to-end via reinforcement learning. Evaluated on multi-object pick-and-place tasks, DOCIR achieves state-of-the-art performance, supports substituting targets and distractors at test time, and enables zero-shot sim-to-real transfer. It significantly improves task robustness and cross-environment generalization.
📝 Abstract
Learning robotic manipulation skills from vision is a promising approach for developing robotics applications that can generalize broadly to real-world scenarios. Accordingly, many approaches toward this goal have been explored, with fruitful results. In particular, object-centric representation methods have been shown to provide better inductive biases for skill learning, leading to improved performance and generalization. Nonetheless, we show that object-centric methods can struggle to learn simple manipulation skills in multi-object environments. Thus, we propose DOCIR, an object-centric framework that introduces a disentangled representation for objects of interest, obstacles, and robot embodiment. We show that this approach leads to state-of-the-art performance in learning pick-and-place skills from visual inputs in multi-object environments, and that it generalizes at test time to changes in the objects of interest and distractors in the scene. Furthermore, we demonstrate its efficacy both in simulation and in zero-shot transfer to the real world.
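To make the decoupled-representation idea concrete, below is a minimal, illustrative PyTorch sketch (not the authors' code): it encodes target, obstacle, and robot-body streams with separate branches and concatenates the resulting features as input to a manipulation policy. The class names, network sizes, and the assumption that per-entity masks come from an upstream object-discovery or segmentation step are all hypothetical; the paper's actual architecture, fusion mechanism, and training details may differ.

```python
# Minimal sketch of a decoupled object-centric encoder (illustrative, not DOCIR's code).
# Each entity class (target, obstacles, robot body) gets its own encoder branch, so
# swapping targets or distractors at test time only changes one branch's input.
import torch
import torch.nn as nn


class BranchEncoder(nn.Module):
    """Small CNN encoder for one masked image stream (hypothetical architecture)."""

    def __init__(self, in_channels: int = 3, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DecoupledObjectCentricEncoder(nn.Module):
    """Encodes target, obstacle, and robot streams separately, then fuses them."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.target_enc = BranchEncoder(feat_dim=feat_dim)
        self.obstacle_enc = BranchEncoder(feat_dim=feat_dim)
        self.robot_enc = BranchEncoder(feat_dim=feat_dim)

    def forward(
        self,
        rgb: torch.Tensor,
        target_mask: torch.Tensor,
        obstacle_mask: torch.Tensor,
        robot_mask: torch.Tensor,
    ) -> torch.Tensor:
        # Each branch sees only the pixels of its entity class; the masks are assumed
        # to come from an upstream object-discovery / segmentation step.
        z_target = self.target_enc(rgb * target_mask)
        z_obstacle = self.obstacle_enc(rgb * obstacle_mask)
        z_robot = self.robot_enc(rgb * robot_mask)
        return torch.cat([z_target, z_obstacle, z_robot], dim=-1)  # policy input


if __name__ == "__main__":
    encoder = DecoupledObjectCentricEncoder()
    rgb = torch.rand(2, 3, 64, 64)  # batch of RGB observations
    masks = [torch.randint(0, 2, (2, 1, 64, 64)).float() for _ in range(3)]
    features = encoder(rgb, *masks)
    print(features.shape)  # torch.Size([2, 192]) -> fed to an RL policy head
```

The point of the separation, as described above, is that the policy receives a fixed-structure feature vector in which the target, obstacle, and embodiment information occupy distinct slots, which is what makes test-time substitution of targets and distractors plausible without retraining.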