🤖 AI Summary
This work addresses the challenge of bimanual manipulation, which demands policies capable of 3D geometric reasoning, dynamic scene prediction, and coordinated control. While existing approaches often rely on 2D features or require explicitly acquired point clouds, this paper presents the first unified framework that couples a pretrained 3D geometric foundation model with action prediction. Using only RGB images, the method constructs a compact state representation that fuses implicit geometric latents, 2D semantic cues, and proprioceptive information; a diffusion model then jointly predicts future action sequences and 3D scene dynamics. Evaluated both in RoboTwin simulation and in real-world robotic experiments, the approach significantly outperforms 2D- and point-cloud-based baselines in task success rate, bimanual coordination, and 3D dynamic prediction accuracy, achieving state-of-the-art performance.
📝 Abstract
Bimanual manipulation requires policies that can reason about 3D geometry, anticipate how it evolves under action, and generate smooth, coordinated motions. However, existing methods typically rely on 2D features with limited spatial awareness, or require explicit point clouds that are difficult to obtain reliably in real-world settings. At the same time, recent 3D geometric foundation models show that accurate and diverse 3D structure can be reconstructed directly from RGB images in a fast and robust manner. We leverage this opportunity and propose a framework that builds bimanual manipulation directly on a pre-trained 3D geometric foundation model. Our policy fuses geometry-aware latents, 2D semantic features, and proprioception into a unified state representation, and uses a diffusion model to jointly predict a future action chunk and a future 3D latent that decodes into a dense pointmap. By explicitly predicting how the 3D scene will evolve together with the action sequence, the policy gains strong spatial understanding and predictive capability using only RGB observations. We evaluate our method both in simulation on the RoboTwin benchmark and in real-world robot executions. Our approach consistently outperforms 2D-based and point-cloud-based baselines, achieving state-of-the-art performance in manipulation success, inter-arm coordination, and 3D spatial prediction accuracy. Code is available at https://github.com/Chongyang-99/GAP.git.
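The pipeline the abstract describes — fusing a geometry-aware latent, 2D semantic features, and proprioception into one state, then diffusion-denoising a joint target consisting of an action chunk and a future 3D latent — can be sketched roughly as follows. This is a minimal shape-level illustration, not the paper's implementation: all module names, dimensions, and the toy one-layer denoiser are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
D_GEO, D_SEM, D_PROP = 256, 128, 14   # 3D geometry latent, 2D semantics, joint states
D_STATE = D_GEO + D_SEM + D_PROP
H, D_ACT = 16, 14                     # action-chunk horizon, per-step action dim
D_FUTURE = 256                        # future 3D latent (decodes into a dense pointmap)
D_TGT = H * D_ACT + D_FUTURE          # joint diffusion target: actions + future latent

def fuse_state(geo_latent, sem_feat, proprio):
    """Unified state representation: concatenate the three observation streams."""
    return np.concatenate([geo_latent, sem_feat, proprio])

# Stand-in denoiser: a single random linear map from (noisy target, state,
# timestep) to a predicted update. A real policy would use a learned network.
W = rng.normal(0.0, 0.01, size=(D_TGT, D_TGT + D_STATE + 1))

def denoise_step(noisy_tgt, state, t):
    """Predict the denoising direction for the joint (actions, 3D latent) target."""
    inp = np.concatenate([noisy_tgt, state, [t]])
    return W @ inp

# One reverse-diffusion-style rollout (schedule constants are placeholders).
state = fuse_state(rng.normal(size=D_GEO),
                   rng.normal(size=D_SEM),
                   rng.normal(size=D_PROP))
x = rng.normal(size=D_TGT)            # start from pure noise
for t in range(10, 0, -1):
    x = x - 0.1 * denoise_step(x, state, t / 10)

actions = x[:H * D_ACT].reshape(H, D_ACT)  # predicted future action chunk
future_latent = x[H * D_ACT:]              # predicted future 3D scene latent
print(actions.shape, future_latent.shape)  # (16, 14) (256,)
```

The key structural point, as in the abstract, is that a single denoising target carries both the action sequence and the future 3D latent, so the two are predicted jointly from the same fused RGB-only state.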