🤖 AI Summary
Current vision-language-action (VLA) models face two key bottlenecks: autoregressive paradigms impair action continuity, while pure diffusion approaches rely on static multimodal features, degrading reasoning capability. To address these, we propose the first unified VLA framework that embeds conditional diffusion modeling directly into an autoregressive language-model architecture, enabling joint optimization of temporal coherence and physical plausibility in action generation. We further design an adaptive dual-strategy fusion mechanism that dynamically integrates autoregressive prediction and diffusion-based refinement during both training and inference. Leveraging multimodal feature alignment and end-to-end co-training, our model achieves state-of-the-art performance across simulated and real-world single- and dual-arm robotic tasks. Crucially, it generalizes to unseen robot configurations and maintains stable dexterous manipulation.
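Embedding diffusion modeling inside an autoregressive backbone means both objectives can be optimized jointly. A minimal sketch of such a combined loss, assuming the diffusion branch is trained to predict the injected noise; the function names and the weighting `lam` are illustrative, not the paper's exact formulation:

```python
import numpy as np

def hybrid_loss(token_logits, token_target, eps_pred, eps_true, lam=1.0):
    """Joint objective: next-token cross-entropy + denoising MSE (sketch)."""
    # Autoregressive branch: cross-entropy of the next action token
    probs = np.exp(token_logits - token_logits.max())
    probs /= probs.sum()
    ce = -np.log(probs[token_target])
    # Diffusion branch: MSE between predicted and injected noise
    mse = np.mean((eps_pred - eps_true) ** 2)
    return ce + lam * mse
```

With uniform logits over 4 tokens and a perfect noise prediction, the loss reduces to the cross-entropy term alone, log 4.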
📝 Abstract
Recent advances in vision-language models (VLMs) for common-sense reasoning have led to vision-language-action (VLA) models that enable robots to perform generalized manipulation. Although existing autoregressive VLA methods leverage large-scale pretrained knowledge, they discretize actions into tokens, disrupting action continuity. Meanwhile, some VLA methods attach an additional diffusion head to predict continuous actions, but it conditions solely on VLM-extracted features, which limits reasoning capability. In this paper, we introduce HybridVLA, a unified framework that seamlessly integrates the strengths of both autoregressive and diffusion policies within a single large language model, rather than simply connecting the two. To bridge the gap between the two generation paradigms, we propose a collaborative training recipe that injects diffusion modeling directly into next-token prediction. With this recipe, we find that the two forms of action prediction not only reinforce each other but also perform differently across tasks. We therefore design a collaborative action ensemble mechanism that adaptively fuses the two predictions, yielding more robust control. In experiments, HybridVLA outperforms previous state-of-the-art VLA methods across diverse simulation and real-world tasks, including both single-arm and dual-arm robots, and demonstrates stable manipulation in previously unseen configurations.
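The adaptive ensemble can be pictured as a confidence-weighted blend of the two action predictions. A minimal sketch, assuming per-prediction scalar confidences; the weighting rule here is a hypothetical illustration, not the paper's actual mechanism:

```python
import numpy as np

def fuse_actions(ar_action, diff_action, ar_conf, diff_conf):
    """Confidence-weighted fusion of autoregressive and diffusion
    action predictions (illustrative sketch)."""
    ar_action = np.asarray(ar_action, dtype=float)
    diff_action = np.asarray(diff_action, dtype=float)
    # Normalize the two confidences into a single mixing weight
    w = ar_conf / (ar_conf + diff_conf)
    return w * ar_action + (1.0 - w) * diff_action
```

With equal confidences the fused action is the simple average; as one branch's confidence dominates, the fused action approaches that branch's prediction.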