🤖 AI Summary
This work addresses a central challenge in imitation learning: multimodal action-sequence distributions make it difficult to achieve action consistency and fine-grained variation simultaneously. Existing approaches face a trade-off between losing detail through discretization and instability in continuous generation. To resolve this, the paper proposes PF-DAG, a two-stage framework that explicitly decouples backbone motion patterns from their fine variations: it first selects discrete motion primitives to ensure coarse consistency, then employs a mode-conditioned MeanFlow to generate high-fidelity continuous actions. Theoretically, the method achieves a strictly tighter upper bound on mean squared error than single-stage strategies and supports reactive closed-loop control. Evaluated across 56 tasks in Adroit, DexArt, and MetaWorld, PF-DAG outperforms state-of-the-art methods and generalizes to real-world tactile dexterous manipulation.
📝 Abstract
The multi-modal distribution of robotic manipulation action sequences poses critical challenges for imitation learning. To address this, existing approaches model the action space as either a discrete set of tokens or a continuous, latent-variable distribution. However, both approaches present trade-offs: methods that discretize actions into tokens lose fine-grained action variations, while methods that generate continuous actions in a single stage tend to produce unstable mode transitions. To address these limitations, we propose Primary-Fine Decoupling for Action Generation (PF-DAG), a two-stage framework that decouples coarse action consistency from fine-grained variations. First, we compress action chunks into a small set of discrete modes, enabling a lightweight policy to select consistent coarse modes and avoid mode bouncing. Second, a mode-conditioned MeanFlow policy is learned to generate high-fidelity continuous actions. Theoretically, we prove that PF-DAG's two-stage design achieves a strictly lower mean squared error (MSE) bound than single-stage generative policies. Empirically, PF-DAG outperforms state-of-the-art baselines across 56 tasks from the Adroit, DexArt, and MetaWorld benchmarks, and further generalizes to real-world tactile dexterous manipulation tasks. Our work demonstrates that explicit mode-level decoupling enables both robust multi-modal modeling and reactive closed-loop control for robotic manipulation.
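The two-stage inference pipeline described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: all dimensions, the random-weight networks, and the one-step generator standing in for the mode-conditioned MeanFlow policy are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative assumptions, not from the paper.
OBS_DIM, ACT_DIM, NUM_MODES, HIDDEN = 16, 7, 8, 32

# Stage 1: a lightweight mode selector (random weights stand in for a trained net).
W_sel = rng.normal(size=(OBS_DIM, NUM_MODES))

def select_mode(obs):
    """Pick a discrete motion primitive; a deterministic argmax keeps the
    coarse choice consistent across nearby observations (no mode bouncing)."""
    logits = obs @ W_sel
    return int(np.argmax(logits))

# Stage 2: a generator conditioned on the selected mode. MeanFlow-style models
# predict an average-velocity field, enabling one-step sampling x1 = x0 + u.
mode_emb = rng.normal(size=(NUM_MODES, HIDDEN))
W_in = rng.normal(size=(OBS_DIM + HIDDEN + ACT_DIM, HIDDEN)) / np.sqrt(HIDDEN)
W_out = rng.normal(size=(HIDDEN, ACT_DIM)) / np.sqrt(HIDDEN)

def generate_action(obs, mode):
    """Map noise to a continuous action, conditioned on observation and mode."""
    x0 = rng.normal(size=ACT_DIM)                                 # noise sample
    h = np.tanh(np.concatenate([obs, mode_emb[mode], x0]) @ W_in)  # toy network
    u = h @ W_out                                                  # velocity field
    return x0 + u                                                  # one-step flow

# Closed-loop step: re-select the mode from the latest observation, then refine.
obs = rng.normal(size=OBS_DIM)
mode = select_mode(obs)
action = generate_action(obs, mode)
```

Decoupling the two stages means the selector only has to resolve which of the few discrete modes applies, while the continuous generator only models variation within a single mode, which is the intuition behind the tighter MSE bound.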