Decoupled Action Head: Confining Task Knowledge to Conditioning Layers

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Policies (DPs) and their variants for behavior cloning suffer from poor generalization and a lack of principled design, owing to scarce paired demonstration data and opaque internal mechanisms. To address this, we propose a decoupled training paradigm: first, a generic action head is pretrained on observation-free, kinematics-generated trajectories; second, the frozen head is adapted to new tasks via a lightweight feature-modulation module. Crucially, task-specific knowledge is strictly confined to this conditioning module, revealing the limited functional role of the DP backbone. This motivates replacing the U-Net with an MLP, yielding DP-MLP, a parameter- and compute-efficient model. Experiments show that DP-MLP trains up to 89.1% faster while maintaining performance, and that decoupling improves generalization and deployment efficiency on both in-distribution and out-of-distribution robotic manipulation tasks.

📝 Abstract
Behavior Cloning (BC) is a data-driven supervised learning approach that has gained increasing attention with the success of scaling laws in language and vision domains. Among its implementations in robotic manipulation, Diffusion Policy (DP), with its two variants DP-CNN (DP-C) and DP-Transformer (DP-T), is one of the most effective and widely adopted models, demonstrating the advantages of predicting continuous action sequences. However, both DP and other BC methods remain constrained by the scarcity of paired training data, and the internal mechanisms underlying DP's effectiveness remain insufficiently understood, leading to limited generalization and a lack of principled design in model development. In this work, we propose a decoupled training recipe that leverages nearly cost-free kinematics-generated trajectories as observation-free data to pretrain a general action head (action generator). The pretrained action head is then frozen and adapted to novel tasks through feature modulation. Our experiments demonstrate the feasibility of this approach in both in-distribution and out-of-distribution scenarios. As an additional benefit, decoupling improves training efficiency; for instance, DP-C achieves up to a 41% speedup. Furthermore, the confinement of task-specific knowledge to the conditioning components under decoupling, combined with the near-identical performance of DP-C under normal and decoupled training, indicates that the action-generation backbone plays a limited role in robotic manipulation. Motivated by this observation, we introduce DP-MLP, which replaces the 244M-parameter U-Net backbone of DP-C with simple MLP blocks totalling only 4M parameters, achieving an 83.9% faster training speed under normal training and 89.1% under decoupling.
Problem

Research questions and friction points this paper is trying to address.

Limited generalization in behavior cloning due to scarce paired training data
Lack of understanding about internal mechanisms in Diffusion Policy models
Inefficient model design with oversized backbones for robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrains action head with kinematics-generated trajectories
Freezes pretrained head and adapts via feature modulation
Replaces U-Net backbone with lightweight MLP blocks
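The decoupling idea in the points above can be sketched as FiLM-style feature modulation: a frozen, task-agnostic layer whose output is scaled and shifted by a small condition network. This is a hedged illustration under assumed names and dimensions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper).
FEAT_DIM, OBS_DIM = 8, 16

# "Pretrained" action-head weights: in the decoupled recipe these would be
# learned from kinematics-generated trajectories and then frozen.
W_frozen = rng.standard_normal((FEAT_DIM, FEAT_DIM)) * 0.1

# Lightweight conditioning module: maps an observation embedding to a
# per-feature scale (gamma) and shift (beta). Under decoupling, only these
# weights are updated during task adaptation.
W_gamma = rng.standard_normal((OBS_DIM, FEAT_DIM)) * 0.1
W_beta = rng.standard_normal((OBS_DIM, FEAT_DIM)) * 0.1

def modulated_layer(h, obs):
    """Apply the frozen transform, then modulate it with the task condition."""
    gamma = 1.0 + obs @ W_gamma   # scale, initialised near identity
    beta = obs @ W_beta           # shift
    z = np.tanh(h @ W_frozen)     # frozen, task-agnostic transform
    return gamma * z + beta       # task knowledge enters only here

h = rng.standard_normal(FEAT_DIM)    # intermediate action feature
obs = rng.standard_normal(OBS_DIM)   # observation embedding for a task
out = modulated_layer(h, obs)
print(out.shape)
```

With a zero observation embedding the modulation reduces to the identity (gamma = 1, beta = 0), which is why such conditioning can be bolted onto a frozen backbone without disturbing its pretrained behavior at initialisation.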
Jian Zhou
Australian Institute for Machine Learning, University of Adelaide, SA, Australia
Sihao Lin
Postdoc, AIML, The University of Adelaide
Artificial intelligence · Pattern recognition · Vision-language model
Shuai Fu
Australian Institute for Machine Learning, University of Adelaide, SA, Australia
Qi Wu
Australian Institute for Machine Learning, University of Adelaide, SA, Australia