SpatialActor: Exploring Disentangled Spatial Representations for Robust Robotic Manipulation

๐Ÿ“… 2025-11-12
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing robotic manipulation methods suffer from coarse-grained semantics on sparse point clouds, and from semantic-geometric entanglement in RGB-D pipelines built on 2D backbones, leaving them sensitive to depth noise and lacking low-level spatial cues. To address these issues, the authors propose SpatialActor, a framework that explicitly decouples semantic and geometric representations. A Semantic-guided Geometric Module adaptively fuses geometry from noisy depth with semantic-guided expert priors, while a Spatial Transformer leverages low-level spatial cues for accurate 2D-3D mapping and fine-grained spatial feature interaction. Evaluated on 50+ simulated and real-world tasks, SpatialActor achieves state-of-the-art performance: an 87.4% success rate on RLBench, relative improvements of 13.9% to 19.4% under varying depth noise, and strong few-shot generalization to new tasks.

๐Ÿ“ Abstract
Robotic manipulation requires precise spatial understanding to interact with objects in the real world. Point-based methods suffer from sparse sampling, leading to the loss of fine-grained semantics. Image-based methods typically feed RGB and depth into 2D backbones pre-trained on 3D auxiliary tasks, but their entangled semantics and geometry are sensitive to the depth noise inherent in real-world sensing, which disrupts semantic understanding. Moreover, these methods focus on high-level geometry while overlooking the low-level spatial cues essential for precise interaction. We propose SpatialActor, a disentangled framework for robust robotic manipulation that explicitly decouples semantics and geometry. The Semantic-guided Geometric Module adaptively fuses two complementary sources of geometry: noisy depth and semantic-guided expert priors. In addition, a Spatial Transformer leverages low-level spatial cues for accurate 2D-3D mapping and enables interaction among spatial features. We evaluate SpatialActor on multiple simulation and real-world scenarios across 50+ tasks. It achieves state-of-the-art performance with 87.4% on RLBench and improves by 13.9% to 19.4% under varying noisy conditions, showing strong robustness. Moreover, it significantly enhances few-shot generalization to new tasks and maintains robustness under various spatial perturbations. Project Page: https://shihao1895.github.io/SpatialActor
Problem

Research questions and friction points this paper is trying to address.

Robotic manipulation requires precise spatial understanding of object interactions
Existing methods suffer from entangled semantics and geometry sensitive to depth noise
Current approaches overlook low-level spatial cues essential for precise robotic interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled framework decouples semantics and geometry
Semantic-guided module fuses geometry from depth and priors
Spatial Transformer uses low-level cues for 2D-3D mapping
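The innovation bullets above describe a semantics-guided fusion of two geometric sources (noisy depth and expert priors). A minimal NumPy sketch of one plausible gating scheme; the function name, sigmoid gate, and toy shapes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_guided_fusion(sem_feat, geo_depth, geo_prior):
    """Hypothetical sketch: blend two geometric estimates with a
    semantic-conditioned gate (stand-in for a learned module)."""
    # Gate in (0, 1) derived from the semantic features.
    gate = 1.0 / (1.0 + np.exp(-sem_feat.mean(axis=-1, keepdims=True)))
    # Convex combination: lean on depth where the gate trusts it,
    # fall back to the expert prior elsewhere.
    return gate * geo_depth + (1.0 - gate) * geo_prior

# Toy example: 4 spatial tokens with 8-dim semantics and 3-dim geometry.
sem = rng.standard_normal((4, 8))
g_depth = rng.standard_normal((4, 3))
g_prior = rng.standard_normal((4, 3))
fused = semantic_guided_fusion(sem, g_depth, g_prior)
```

Because the blend is convex, each fused coordinate stays bounded between the two geometric estimates, which caps how far depth noise alone can pull the result.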
👥 Authors
Hao Shi
Department of Automation, BNRist, Tsinghua University
Bin Xie
InfoBeyond Technology LLC
Mobile Computing, Security, Big Data Streaming
Yingfei Liu
Megvii Technology
Yang Yue
Department of Automation, BNRist, Tsinghua University
Tiancai Wang
Dexmal
Computer Vision, Embodied AI
Haoqiang Fan
Megvii
Computer Vision
Xiangyu Zhang
MEGVII Technology, StepFun
Gao Huang
Department of Automation, BNRist, Tsinghua University