Vega: Learning to Drive with Natural Language Instructions

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling autonomous driving systems to flexibly respond to diverse natural language instructions for personalized driving. To this end, the authors introduce InstructScene, a large-scale instruction-driven driving dataset, and propose Vega—a unified vision-language-world-action model that achieves end-to-end natural language–guided trajectory planning for the first time. Vega integrates autoregressive and diffusion paradigms: the former processes multimodal inputs (vision and language), while the latter generates future states and trajectories, with cross-modal joint attention facilitating effective alignment between modalities. Experimental results demonstrate that Vega significantly outperforms existing approaches in both trajectory planning performance and instruction-following fidelity, advancing the development of intelligent, personalized autonomous driving systems.

📝 Abstract
Vision-language-action models have reshaped autonomous driving by incorporating language into the decision-making process. However, most existing pipelines use the language modality only for scene description or reasoning and lack the flexibility to follow diverse user instructions for personalized driving. To address this, we first construct a large-scale driving dataset (InstructScene) containing around 100,000 scenes annotated with diverse driving instructions and the corresponding trajectories. We then propose a unified Vision-Language-World-Action model, Vega, for instruction-based generation and planning. We employ the autoregressive paradigm to process visual inputs (vision) and language instructions (language), and the diffusion paradigm to generate future predictions (world modeling) and trajectories (action). We apply joint attention to enable interaction between the modalities and use separate projection layers for each modality to increase model capacity. Extensive experiments demonstrate that our method not only achieves superior planning performance but also exhibits strong instruction-following abilities, paving the way for more intelligent and personalized driving systems.
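The abstract's core architectural idea is joint attention over concatenated tokens from all four modalities, with a separate projection per modality. The paper's implementation is not public here, so the following is only a minimal NumPy sketch of that pattern; the function name, dimensions, and random weight initialization are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(tokens_by_modality, d_model=16, seed=0):
    """Sketch of joint attention over concatenated modality tokens.

    Each modality (e.g. vision, language, world, action) gets its own
    Q/K/V projection; attention is then computed over the full joint
    token sequence so every modality can attend to every other.
    Weights are random placeholders, not trained parameters.
    """
    rng = np.random.default_rng(seed)
    qs, ks, vs = [], [], []
    for name, toks in tokens_by_modality.items():
        d_in = toks.shape[1]
        # separate projection layers per modality (hypothetical shapes)
        Wq, Wk, Wv = (rng.standard_normal((d_in, d_model)) / np.sqrt(d_in)
                      for _ in range(3))
        qs.append(toks @ Wq)
        ks.append(toks @ Wk)
        vs.append(toks @ Wv)
    # concatenate along the token axis to form one joint sequence
    Q, K, V = (np.concatenate(m, axis=0) for m in (qs, ks, vs))
    scores = Q @ K.T / np.sqrt(d_model)   # (N_total, N_total)
    attn = softmax(scores, axis=-1)       # rows sum to 1
    return attn @ V                        # (N_total, d_model)

tokens = {
    "vision":   np.ones((4, 8)),  # 4 visual tokens, width 8
    "language": np.ones((3, 8)),  # 3 instruction tokens
    "action":   np.ones((2, 8)),  # 2 trajectory tokens
}
out = joint_attention(tokens)
```

The output has one row per input token across all modalities, so downstream heads (diffusion-based world modeling or trajectory decoding, per the abstract) could read off their modality's slice of the joint sequence.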
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
natural language instructions
vision-language-action models
personalized driving
instruction following
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language-action model
instruction-based driving
diffusion planning
multimodal attention
autonomous driving dataset