🤖 AI Summary
Existing vision-language-action (VLA) models for autonomous driving struggle to simultaneously achieve robust spatial perception and high-level semantic reasoning, limiting their overall performance. To address this challenge, this work proposes UniDriveVLA, which decouples these two capabilities into dedicated expert modules within a hybrid Transformer architecture. The framework integrates masked joint attention, sparse-aware representations, and a three-stage progressive training strategy to enable efficient and synergistic optimization. Evaluated on both the nuScenes open-loop benchmark and the Bench2Drive closed-loop benchmark, UniDriveVLA achieves state-of-the-art performance, significantly outperforming existing methods across multiple tasks, including 3D object detection, online mapping, motion prediction, and driving-oriented visual question answering.
📝 Abstract
Vision-Language-Action (VLA) models have recently emerged in autonomous driving, with the promise of leveraging rich world knowledge to improve the cognitive capabilities of driving systems. However, adapting such models for driving tasks currently faces a critical dilemma between spatial perception and semantic reasoning. Consequently, existing VLA systems are forced into suboptimal compromises: directly adopting 2D Vision-Language Models yields limited spatial perception, whereas enhancing them with 3D spatial representations often impairs the native reasoning capacity of VLMs. We argue that this dilemma largely stems from the coupled optimization of spatial perception and semantic reasoning within shared model parameters. To overcome this, we propose UniDriveVLA, a Unified Driving Vision-Language-Action model based on Mixture-of-Transformers that addresses the perception-reasoning conflict via expert decoupling. Specifically, it comprises three experts for driving understanding, scene perception, and action planning, which are coordinated through masked joint attention. In addition, we combine a sparse perception paradigm with a three-stage progressive training strategy to improve spatial perception while maintaining semantic reasoning capability. Extensive experiments show that UniDriveVLA achieves state-of-the-art performance in open-loop evaluation on nuScenes and closed-loop evaluation on Bench2Drive. Moreover, it demonstrates strong performance across a broad range of perception, prediction, and understanding tasks, including 3D detection, online mapping, motion forecasting, and driving-oriented VQA, highlighting its broad applicability as a unified model for autonomous driving. Code and model have been released at https://github.com/xiaomi-research/unidrivevla
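The abstract describes three experts (driving understanding, scene perception, action planning) coordinated through masked joint attention, but does not specify the masking pattern. The sketch below is a minimal, hypothetical illustration of the general idea: tokens from the three expert streams share one attention operation, with a block-structured boolean mask controlling which streams can attend to which. The group sizes and the visibility rules (here, the action expert sees all streams while the other two stay within their own) are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

# Hypothetical token counts per expert stream (illustrative only).
groups = {"understanding": 4, "perception": 6, "action": 2}
names = list(groups)
total = sum(groups.values())

# Offset range of each group within the joint token sequence.
offsets, start = {}, 0
for n in names:
    offsets[n] = (start, start + groups[n])
    start += groups[n]

# Assumed visibility pattern: each expert attends within its own
# stream; the action expert additionally attends to both others.
# (UniDriveVLA's real masking pattern may differ.)
visible = {
    "understanding": ["understanding"],
    "perception": ["perception"],
    "action": ["understanding", "perception", "action"],
}

# Build the block-structured boolean attention mask.
mask = np.zeros((total, total), dtype=bool)
for q in names:
    qs, qe = offsets[q]
    for k in visible[q]:
        ks, ke = offsets[k]
        mask[qs:qe, ks:ke] = True

def masked_joint_attention(q, k, v, mask):
    """Scaled dot-product attention; masked positions get zero weight."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(total, 8))       # joint token sequence
out = masked_joint_attention(x, x, x, mask)
```

Because all three streams pass through one joint attention call, the experts can exchange information where the mask allows it while keeping their parameters (and, under a restrictive mask, their representations) largely decoupled.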