🤖 AI Summary
Diffusion policies (DPs) and other behavior cloning methods suffer from limited generalization and a lack of principled design, owing to scarce paired demonstration data and opaque internal mechanisms. To address this, we propose a decoupled training paradigm: first, pretrain a generic action head on nearly cost-free, observation-free kinematics-generated trajectories; second, freeze it and adapt it to novel tasks through a lightweight feature modulation module. Crucially, task-specific knowledge is strictly confined to the conditioning module, which reveals the limited functional role of the DP backbone and motivates replacing the U-Net with simple MLP blocks, yielding DP-MLP: a parameter- and compute-efficient model. Experiments demonstrate the feasibility of the decoupled recipe in both in-distribution and out-of-distribution robotic manipulation scenarios, show that decoupling improves training and deployment efficiency, and show that DP-MLP trains up to 89.1% faster while maintaining performance.
📝 Abstract
Behavior Cloning (BC) is a data-driven supervised learning approach that has gained increasing attention with the success of scaling laws in language and vision domains. Among its implementations in robotic manipulation, Diffusion Policy (DP), with its two variants DP-CNN (DP-C) and DP-Transformer (DP-T), is one of the most effective and widely adopted models, demonstrating the advantages of predicting continuous action sequences. However, both DP and other BC methods remain constrained by the scarcity of paired training data, and the internal mechanisms underlying DP's effectiveness remain insufficiently understood, leading to limited generalization and a lack of principled design in model development. In this work, we propose a decoupled training recipe that leverages nearly cost-free kinematics-generated trajectories as observation-free data to pretrain a general action head (action generator). The pretrained action head is then frozen and adapted to novel tasks through feature modulation. Our experiments demonstrate the feasibility of this approach in both in-distribution and out-of-distribution scenarios. As an additional benefit, decoupling improves training efficiency; for instance, DP-C achieves up to a 41% speedup. Furthermore, the confinement of task-specific knowledge to the conditioning components under decoupling, combined with the near-identical performance of DP-C under normal and decoupled training, indicates that the action generation backbone plays a limited role in robotic manipulation. Motivated by this observation, we introduce DP-MLP, which replaces the 244M-parameter U-Net backbone of DP-C with simple MLP blocks totaling only 4M parameters, achieving an 83.9% faster training speed under normal training and 89.1% under decoupling.
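The decoupled recipe described above can be sketched in toy form: a frozen, pretrained action head denoises an action vector, while a small trainable conditioner injects all task-specific knowledge through feature modulation of the head's hidden activations. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, the FiLM-style (scale-and-shift) modulation, and all class and function names here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not the paper's actual sizes).
ACT_DIM, HID, OBS_DIM = 7, 64, 32

def relu(x):
    return np.maximum(x, 0.0)

class FrozenActionHead:
    """Stand-in for the pretrained action generator (MLP backbone).

    In the decoupled recipe, its weights would be pretrained on
    observation-free kinematics-generated trajectories and then frozen.
    """
    def __init__(self):
        self.w1 = rng.standard_normal((ACT_DIM, HID)) * 0.1
        self.w2 = rng.standard_normal((HID, ACT_DIM)) * 0.1

    def denoise_step(self, noisy_action, gamma, beta):
        h = relu(noisy_action @ self.w1)
        h = gamma * h + beta            # feature modulation (scale and shift)
        return noisy_action - h @ self.w2  # one small denoising update

class TaskConditioner:
    """Lightweight trainable module: maps observation features to (gamma, beta).

    Under decoupling, this is the only place task-specific knowledge lives.
    """
    def __init__(self):
        self.wg = rng.standard_normal((OBS_DIM, HID)) * 0.1
        self.wb = rng.standard_normal((OBS_DIM, HID)) * 0.1

    def __call__(self, obs):
        return 1.0 + obs @ self.wg, obs @ self.wb

head, cond = FrozenActionHead(), TaskConditioner()
obs = rng.standard_normal(OBS_DIM)       # task observation features
action = rng.standard_normal(ACT_DIM)    # start from pure noise
gamma, beta = cond(obs)
for _ in range(10):                      # toy iterative refinement loop
    action = head.denoise_step(action, gamma, beta)
print(action.shape)  # (7,)
```

During adaptation only `TaskConditioner`'s weights would receive gradients; the head stays fixed, which is what makes the backbone swappable for the small MLP used in DP-MLP.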