🤖 AI Summary
In robot visuomotor policy learning, diffusion models achieve high accuracy but suffer from slow inference and inflexible constraint handling. To address this, we propose the Coarse-to-Fine Autoregressive Policy (CARP), a two-stage action generation framework: first, an action autoencoder learns multi-scale representations of the entire action sequence; second, a GPT-style Transformer refines predictions autoregressively from coarse to fine. This next-scale autoregressive paradigm preserves diffusion-level accuracy while substantially improving inference efficiency and task generalization. Experiments show that CARP achieves state-of-the-art performance on both simulated and real-robot tasks, improving success rates by up to 10% and accelerating inference by 10×, thereby addressing the long-standing trade-off among accuracy, speed, and generalization.
📝 Abstract
In robotic visuomotor policy learning, diffusion-based models have achieved significant success in improving the accuracy of action trajectory generation compared to traditional autoregressive models. However, they suffer from inefficiency due to multiple denoising steps and limited flexibility arising from complex constraints. In this paper, we introduce Coarse-to-Fine AutoRegressive Policy (CARP), a novel paradigm for visuomotor policy learning that redefines the autoregressive action generation process as a coarse-to-fine, next-scale approach. CARP decouples action generation into two stages: first, an action autoencoder learns multi-scale representations of the entire action sequence; then, a GPT-style Transformer refines the sequence prediction through a coarse-to-fine autoregressive process. This straightforward and intuitive approach produces highly accurate and smooth actions, matching or even surpassing the performance of diffusion-based policies while maintaining efficiency on par with autoregressive policies. We conduct extensive evaluations across diverse settings, including single-task and multi-task scenarios on state-based and image-based simulation benchmarks, as well as real-world tasks. CARP achieves competitive success rates, with up to a 10% improvement, and delivers 10× faster inference compared to state-of-the-art policies, establishing a high-performance, efficient, and flexible paradigm for action generation in robotic tasks.
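To make the two-stage idea concrete, here is a minimal NumPy sketch of coarse-to-fine, next-scale generation. It is not the paper's implementation: average pooling stands in for CARP's learned multi-scale action autoencoder, and `predict_residual` is a hypothetical placeholder for the GPT-style Transformer that would predict each scale's refinement.

```python
import numpy as np

def to_scales(actions, scales):
    """Downsample a (T, D) action sequence to each temporal scale by
    average pooling -- a crude stand-in for the learned multi-scale
    action autoencoder described in the abstract."""
    T, _ = actions.shape
    maps = []
    for s in scales:
        bounds = np.linspace(0, T, s + 1).astype(int)
        maps.append(np.stack([actions[a:b].mean(axis=0)
                              for a, b in zip(bounds[:-1], bounds[1:])]))
    return maps

def upsample(seq, new_len):
    """Nearest-neighbor upsample a (s, D) sequence to new_len steps."""
    idx = (np.arange(new_len) * len(seq)) // new_len
    return seq[idx]

def coarse_to_fine_decode(predict_residual, scales, horizon, dim):
    """Generate an action sequence scale by scale: at each scale, refine
    the upsampled coarser estimate with a predicted residual.
    `predict_residual(cond, i)` is a hypothetical callback standing in
    for the autoregressive Transformer's prediction at scale index i."""
    estimate = np.zeros((scales[0], dim))
    for i, s in enumerate(scales):
        cond = upsample(estimate, s)                 # coarser estimate as context
        estimate = cond + predict_residual(cond, i)  # refine at this scale
    return upsample(estimate, horizon)
```

For example, with scales `[1, 2, 4, 8]` and a 16-step horizon, the decoder makes only four prediction passes, one per scale, rather than one per timestep (as in classic autoregression) or one per denoising step (as in diffusion) — the intuition behind CARP's inference speedup.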