ISS Policy : Scalable Diffusion Policy with Implicit Scene Supervision

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-based imitation learning often suffers from poor generalization and inefficient training because it relies exclusively on 2D appearance cues while neglecting the underlying 3D scene structure. To address this, we propose a diffusion-based 3D visuomotor policy that operates directly on point cloud inputs. The method introduces a novel implicit scene supervision module that embeds 3D geometric consistency into the denoising process of a Diffusion Transformer (DiT), jointly guiding action-sequence prediction and scene-geometry evolution. The framework combines point cloud encoding, continuous action-sequence generation via diffusion, and implicit geometric supervision. It achieves state-of-the-art performance on the MetaWorld and Adroit benchmarks, and real-robot experiments demonstrate robustness and strong cross-task generalization. Ablation studies show stable performance gains with increasing data volume and model scale, underscoring both scalability and engineering practicality.

📝 Abstract
Vision-based imitation learning has enabled impressive robotic manipulation skills, but its reliance on object appearance while ignoring the underlying 3D scene structure leads to low training efficiency and poor generalization. To address these challenges, we introduce Implicit Scene Supervision (ISS) Policy, a 3D visuomotor DiT-based diffusion policy that predicts sequences of continuous actions from point cloud observations. We extend DiT with a novel implicit scene supervision module that encourages the model to produce outputs consistent with the scene's geometric evolution, thereby improving the performance and robustness of the policy. Notably, ISS Policy achieves state-of-the-art performance on both single-arm manipulation tasks (MetaWorld) and dexterous hand manipulation (Adroit). In real-world experiments, it also demonstrates strong generalization and robustness. Additional ablation studies show that our method scales effectively with both data and parameters. Code and videos will be released.
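The page does not include implementation details, but the training objective the abstract describes — denoising a continuous action sequence while an auxiliary term keeps the model's outputs consistent with the scene's geometric evolution — can be sketched as below. All function names, tensor shapes, and the weighting `lam` are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_actions(a0, alpha_bar, eps):
    """Forward diffusion step on an action sequence (standard DDPM form):
    a_t = sqrt(alpha_bar) * a_0 + sqrt(1 - alpha_bar) * eps."""
    return np.sqrt(alpha_bar) * a0 + np.sqrt(1.0 - alpha_bar) * eps

def iss_loss(pred_eps, true_eps, pred_geom, target_geom, lam=0.1):
    """Hypothetical combined objective: the usual denoising MSE on the
    action sequence, plus an implicit scene supervision term that ties a
    predicted scene representation to the scene's observed evolution."""
    l_denoise = np.mean((pred_eps - true_eps) ** 2)
    l_scene = np.mean((pred_geom - target_geom) ** 2)
    return l_denoise + lam * l_scene

# Toy shapes: a horizon of 8 actions with 7 DoF, and a 128-d scene code.
a0 = rng.standard_normal((8, 7))
eps = rng.standard_normal((8, 7))
a_t = noise_actions(a0, alpha_bar=0.5, eps=eps)

# A perfect denoiser and scene predictor drive the loss to zero.
geom = rng.standard_normal(128)
assert iss_loss(eps, eps, geom, geom) == 0.0
```

The point of the sketch is that scene supervision enters only as an extra loss term on an internal representation, so the policy's action head and sampling loop are unchanged from a plain DiT diffusion policy.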
Problem

Research questions and friction points this paper is trying to address.

Improves robotic manipulation via 3D scene structure
Enhances training efficiency and generalization in imitation learning
Scales effectively with increased data and model parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit Scene Supervision for geometric consistency
3D visuomotor DiT-based diffusion policy
Scales effectively with data and parameters
Wenlong Xia
Jinhao Zhang
Harbin Institute of Technology, Shenzhen
Autonomous Driving · Embodied AI · Generative Model
Ce Zhang
Yaojia Wang
Youmin Gong
Jie Mei