Action-Geometry Prediction with 3D Geometric Prior for Bimanual Manipulation

📅 2026-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of bimanual manipulation, which demands policies capable of 3D geometric reasoning, dynamic scene prediction, and coordinated control. While existing approaches often rely on 2D features or require explicitly acquired point clouds, this paper presents the first unified framework that jointly models a pretrained 3D geometric foundation model with action prediction. Using only RGB images, the method constructs a compact state representation that fuses implicit geometric latents, 2D semantic cues, and proprioceptive information. A diffusion model is then employed to simultaneously predict future action sequences and 3D scene dynamics. Evaluated in both RoboTwin simulation and real-world robotic experiments, the approach significantly outperforms current 2D- and point-cloud-based baselines in task success rate, bimanual coordination, and 3D dynamic prediction accuracy, achieving state-of-the-art performance.

Technology Category

Application Category

📝 Abstract
Bimanual manipulation requires policies that can reason about 3D geometry, anticipate how it evolves under action, and generate smooth, coordinated motions. However, existing methods typically rely on 2D features with limited spatial awareness, or require explicit point clouds that are difficult to obtain reliably in real-world settings. At the same time, recent 3D geometric foundation models show that accurate and diverse 3D structure can be reconstructed directly from RGB images in a fast and robust manner. We leverage this opportunity and propose a framework that builds bimanual manipulation directly on a pre-trained 3D geometric foundation model. Our policy fuses geometry-aware latents, 2D semantic features, and proprioception into a unified state representation, and uses diffusion model to jointly predict a future action chunk and a future 3D latent that decodes into a dense pointmap. By explicitly predicting how the 3D scene will evolve together with the action sequence, the policy gains strong spatial understanding and predictive capability using only RGB observations. We evaluate our method both in simulation on the RoboTwin benchmark and in real-world robot executions. Our approach consistently outperforms 2D-based and point-cloud-based baselines, achieving state-of-the-art performance in manipulation success, inter-arm coordination, and 3D spatial prediction accuracy. Code is available at https://github.com/Chongyang-99/GAP.git.
Problem

Research questions and friction points this paper is trying to address.

bimanual manipulation
3D geometry
action prediction
spatial awareness
RGB-based perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D geometric foundation model
bimanual manipulation
diffusion policy
action-geometry prediction
RGB-based 3D reconstruction
C
Chongyang Xu
College of Computer Science, Sichuan University, China
H
Haipeng Li
University of Electronic Science and Technology of China, China
Shen Cheng
Shen Cheng
Megvii Research
Deep Learning
Jingyu Hu
Jingyu Hu
The Chinese University of Hong Kong
AIGC3D GenerationComputer graphics
Haoqiang Fan
Haoqiang Fan
Megvii
computer vision
Z
Ziliang Feng
College of Computer Science, Sichuan University, China
Shuaicheng Liu
Shuaicheng Liu
University of Electronic Science and Technology of China
Computer VisionComputational Photography