AI Summary
Existing robotic manipulation methods suffer from coarse-grained semantic representation on sparse point clouds, and from semantic-geometric entanglement when RGB-D images are processed by 2D backbone networks, leaving them sensitive to depth noise and lacking low-level spatial cues. To address these issues, we propose SpatialActor, the first framework to explicitly decouple semantic and geometric representations. It introduces a semantics-guided geometric fusion module that incorporates expert depth priors, and employs a Spatial Transformer to model fine-grained spatial feature interactions, enabling robust 2D-3D correspondence. Built upon a pre-trained 2D backbone and operating on RGB-D inputs, SpatialActor is evaluated on over 50 simulated and real-world tasks and achieves state-of-the-art performance: an 87.4% success rate on the RLBench multi-task benchmark, relative improvements of 13.9% to 19.4% under depth noise, and strong few-shot generalization.
Abstract
Robotic manipulation requires precise spatial understanding to interact with objects in the real world. Point-based methods suffer from sparse sampling, which loses fine-grained semantics. Image-based methods typically feed RGB and depth into 2D backbones pre-trained on 3D auxiliary tasks, but their entangled semantics and geometry are sensitive to the depth noise inherent in real-world sensing, which disrupts semantic understanding. Moreover, these methods focus on high-level geometry while overlooking the low-level spatial cues essential for precise interaction. We propose SpatialActor, a disentangled framework for robust robotic manipulation that explicitly decouples semantics from geometry. Its Semantic-guided Geometric Module adaptively fuses two complementary sources of geometry: noisy raw depth and semantic-guided expert priors. In addition, a Spatial Transformer leverages low-level spatial cues for accurate 2D-3D mapping and enables interaction among spatial features. We evaluate SpatialActor on 50+ tasks across multiple simulated and real-world scenarios. It achieves state-of-the-art performance with an 87.4% success rate on RLBench and improves by 13.9% to 19.4% under varying noise conditions, showing strong robustness. It also significantly improves few-shot generalization to new tasks and remains robust under various spatial perturbations. Project Page: https://shihao1895.github.io/SpatialActor
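To make the "adaptive fusion of two complementary geometric sources" concrete, below is a minimal numpy sketch of one plausible design: a per-channel gate predicted from semantic features blends geometric features derived from noisy depth with those from an expert prior. All names and shapes here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_geometry(geo_depth, geo_prior, semantic_feat, w_gate):
    """Hypothetical semantics-guided gated fusion (illustrative, not the paper's code).

    geo_depth, geo_prior: (H, W, C) geometric features from noisy depth / expert prior
    semantic_feat:        (H, W, D) semantic features guiding the fusion
    w_gate:               (D, C) linear projection producing per-channel gate logits
    """
    gate = sigmoid(semantic_feat @ w_gate)          # (H, W, C), values in (0, 1)
    # Convex combination: the semantic gate decides, per pixel and channel,
    # how much to trust the raw-depth geometry vs. the prior-based geometry.
    return gate * geo_depth + (1.0 - gate) * geo_prior

rng = np.random.default_rng(0)
H, W, C, D = 4, 4, 8, 16
geo_depth = rng.normal(size=(H, W, C))
geo_prior = rng.normal(size=(H, W, C))
sem = rng.normal(size=(H, W, D))
w = rng.normal(size=(D, C))

fused = fuse_geometry(geo_depth, geo_prior, sem, w)
print(fused.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two geometric sources, so heavily corrupted depth features can be smoothly down-weighted wherever the semantics suggest the expert prior is more reliable.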