PocketDP3: Efficient Pocket-Scale 3D Visuomotor Policy

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the parameter redundancy and low inference efficiency of existing 3D visual diffusion policies, which rely on heavy U-Net decoders. To overcome these limitations, we propose a lightweight 3D diffusion policy that introduces the MLP-Mixer architecture into the decoder for the first time, forming a Diffusion Mixer capable of efficiently integrating spatiotemporal and channel-wise information. This design enables two-step inference without requiring consistency distillation. By combining a point cloud encoder with the lightweight Diffusion Mixer, our method achieves state-of-the-art performance on three simulation benchmarks (RoboTwin2.0, Adroit, and MetaWorld) using less than 1% of the parameters of prior approaches. Furthermore, we demonstrate its feasibility for deployment and strong transferability on real-world robotic platforms.
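The paper does not include code here, but the core idea of the Diffusion Mixer decoder can be illustrated with a minimal MLP-Mixer-style block: a token-mixing MLP applied across the action horizon, followed by a channel-mixing MLP, each wrapped in a residual connection. All dimensions, widths, and parameter shapes below are hypothetical placeholders, not the paper's configuration:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (channel) dimension.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with a tanh-approximated GELU activation.
    h = x @ w1 + b1
    h = 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2 + b2

def mixer_block(x, params):
    # x: (T, C) -- T steps of the action horizon, C feature channels.
    # Token mixing: MLP across T, applied per channel via transpose.
    y = layer_norm(x)
    x = x + mlp(y.T, *params["token"]).T
    # Channel mixing: MLP across C, applied per timestep.
    y = layer_norm(x)
    x = x + mlp(y, *params["channel"])
    return x

# Hypothetical sizes: horizon T=8, channels C=64, hidden widths 32 and 128.
rng = np.random.default_rng(0)
T, C, Ht, Hc = 8, 64, 32, 128
params = {
    "token":   (rng.normal(0, 0.02, (T, Ht)), np.zeros(Ht),
                rng.normal(0, 0.02, (Ht, T)), np.zeros(T)),
    "channel": (rng.normal(0, 0.02, (C, Hc)), np.zeros(Hc),
                rng.normal(0, 0.02, (Hc, C)), np.zeros(C)),
}
x = rng.normal(size=(T, C))
out = mixer_block(x, params)
print(out.shape)  # (8, 64)
```

Because both mixing steps are plain MLPs over fixed-size axes, a stack of such blocks has far fewer parameters than a conditional U-Net decoder, which is the efficiency argument the summary makes.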

📝 Abstract
Recently, 3D vision-based diffusion policies have shown strong capability in learning complex robotic manipulation skills. However, a common architectural mismatch exists in these models: a tiny yet efficient point-cloud encoder is often paired with a massive decoder. Given a compact scene representation, we argue that this pairing wastes substantial decoder parameters. Motivated by this observation, we propose PocketDP3, a pocket-scale 3D diffusion policy that replaces the heavy conditional U-Net decoder used in prior methods with a lightweight Diffusion Mixer (DiM) built on MLP-Mixer blocks. This architecture enables efficient fusion across temporal and channel dimensions, significantly reducing model size. Notably, without any additional consistency distillation, our method supports two-step inference without sacrificing performance, improving practicality for real-time deployment. Across three simulation benchmarks (RoboTwin2.0, Adroit, and MetaWorld), PocketDP3 achieves state-of-the-art performance with fewer than 1% of the parameters of prior methods while also accelerating inference. Real-world experiments further demonstrate the practicality and transferability of our method. Code will be released.
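The two-step inference claimed in the abstract can be sketched with a generic deterministic DDIM-style sampler run over a two-entry noise schedule. The denoiser, schedule values, and action dimensions below are placeholders for illustration, not the paper's actual model or schedule:

```python
import numpy as np

def ddim_sample(denoise, x, alpha_bars):
    # Deterministic DDIM update over a short step schedule.
    # alpha_bars: cumulative noise-schedule values at the chosen steps,
    # ordered from most-noised to least-noised (placeholder values).
    for i in range(len(alpha_bars)):
        a_t = alpha_bars[i]
        a_prev = alpha_bars[i + 1] if i + 1 < len(alpha_bars) else 1.0
        eps = denoise(x, a_t)                                 # predicted noise
        x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)      # predicted clean action
        x = np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps  # step toward x0
    return x

# Two-step schedule with a dummy denoiser that predicts zero noise.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 7))  # action horizon x action dimension (hypothetical)
traj = ddim_sample(lambda x, a: np.zeros_like(x), x, alpha_bars=[0.05, 0.6])
print(traj.shape)  # (8, 7)
```

With only two network evaluations per action chunk, sampler cost is dominated by the denoiser, which is why pairing a short schedule with a small decoder helps real-time deployment.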
Problem

Research questions and friction points this paper is trying to address.

3D vision-based diffusion policy
architectural mismatch
parameter efficiency
real-time deployment
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Policy
MLP-Mixer
3D Vision-Based Control
Parameter Efficiency
Real-Time Inference