🤖 AI Summary
This work addresses the parameter redundancy and low inference efficiency of existing 3D vision-based diffusion policies, which rely on heavy U-Net decoders. To overcome these limitations, we propose a lightweight 3D diffusion policy that, for the first time, introduces the MLP-Mixer architecture into the decoder, forming a Diffusion Mixer that efficiently fuses information across the temporal and channel dimensions. This design enables two-step inference without requiring consistency distillation. By combining a point cloud encoder with the lightweight Diffusion Mixer, our method achieves state-of-the-art performance on three simulation benchmarks (RoboTwin2.0, Adroit, and MetaWorld) using less than 1% of the parameters of prior approaches. Furthermore, we demonstrate its deployability and strong transferability on real-world robotic platforms.
📝 Abstract
Recently, 3D vision-based diffusion policies have shown strong capability in learning complex robotic manipulation skills. However, a common architectural mismatch exists in these models: a compact yet efficient point-cloud encoder is often paired with a massive decoder. Given a compact scene representation, we argue that this pairing wastes a substantial fraction of the decoder's parameters. Motivated by this observation, we propose PocketDP3, a pocket-scale 3D diffusion policy that replaces the heavy conditional U-Net decoder used in prior methods with a lightweight Diffusion Mixer (DiM) built on MLP-Mixer blocks. This architecture enables efficient fusion across the temporal and channel dimensions, significantly reducing model size. Notably, without any additional consistency distillation, our method supports two-step inference without sacrificing performance, improving practicality for real-time deployment. Across three simulation benchmarks (RoboTwin2.0, Adroit, and MetaWorld), PocketDP3 achieves state-of-the-art performance with fewer than 1% of the parameters of prior methods, while also accelerating inference. Real-world experiments further demonstrate the practicality and transferability of our method on physical robotic platforms. Code will be released.
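To make the Diffusion Mixer idea concrete, the sketch below shows a single MLP-Mixer block operating on a (timesteps × channels) action-token matrix: one MLP mixes across the temporal axis, a second mixes across the channel axis, each with a residual connection. This is only an illustrative NumPy sketch, not the authors' implementation; layer normalization, diffusion-timestep conditioning, and the point-cloud encoder are omitted, and all names and sizes are hypothetical.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with a tanh-based GELU approximation, applied along the last axis.
    h = x @ w1 + b1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2 + b2

def mixer_block(x, params):
    # x: (T, C) noisy action tokens -- T prediction timesteps, C feature channels.
    # Token-mixing MLP: transpose so the MLP acts across the temporal axis.
    y = x + mlp(x.T, *params["token"]).T
    # Channel-mixing MLP: acts across the channel axis of each timestep.
    return y + mlp(y, *params["channel"])

rng = np.random.default_rng(0)
T, C, H = 8, 16, 32  # illustrative sizes, not from the paper
params = {
    "token":   (0.02 * rng.standard_normal((T, H)), np.zeros(H),
                0.02 * rng.standard_normal((H, T)), np.zeros(T)),
    "channel": (0.02 * rng.standard_normal((C, H)), np.zeros(H),
                0.02 * rng.standard_normal((H, C)), np.zeros(C)),
}
x = rng.standard_normal((T, C))
out = mixer_block(x, params)
print(out.shape)  # (8, 16): shape is preserved, so blocks can be stacked
```

Because each block is just two small MLPs, the parameter count grows with T·H + C·H rather than with the wide convolutional channels of a U-Net decoder, which is consistent with the sub-1% parameter budget the paper reports.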