🤖 AI Summary
This work addresses the challenge faced by current vision-language models (VLMs) in autonomous driving, where balancing high-level semantic reasoning with precise motion control often entails a trade-off between model scale and planning accuracy. To overcome this limitation, the authors propose a decoupled framework that, for the first time, explicitly separates reasoning from control: a large-scale vision-language model (Navigator) handles semantic understanding, while a lightweight, trainable driving module (Driver) executes motion planning. This design preserves strong semantic capabilities while significantly reducing training costs and yielding interpretable intermediate representations. Evaluated on the nuScenes end-to-end motion planning benchmark, the proposed approach outperforms existing large VLM-based baselines.
📝 Abstract
Vision-language models (VLMs) have emerged as a promising direction for end-to-end autonomous driving (AD) by jointly modeling visual observations, driving context, and language-based reasoning. However, existing VLM-based systems face a trade-off between high-level reasoning and motion planning: large models offer strong semantic understanding but are costly to adapt for precise control, whereas smaller VLMs can be fine-tuned efficiently but often exhibit weaker reasoning. We propose NaviDriveVLM, a decoupled framework that separates reasoning from action generation using a large-scale Navigator and a lightweight trainable Driver. This design preserves reasoning ability, reduces training cost, and provides an explicit, interpretable intermediate representation for downstream planning. Experiments on the nuScenes benchmark show that NaviDriveVLM outperforms large VLM baselines in end-to-end motion planning.
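The decoupled Navigator/Driver design described above can be sketched as a simple two-stage pipeline. The abstract does not specify the interface between the two modules, so everything below — the `SceneSummary` schema, the class names, and the toy planning logic — is an illustrative assumption, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical intermediate representation: the paper only states that the
# Navigator emits an explicit, interpretable representation for the Driver;
# this concrete schema is an illustrative assumption.
@dataclass
class SceneSummary:
    maneuver: str            # e.g. "keep_lane", "turn_left"
    hazards: List[str]       # salient objects described in language
    target_speed_mps: float  # coarse speed guidance for the planner

class Navigator:
    """Stand-in for the frozen large VLM: maps observations to semantics."""
    def reason(self, scene_tokens: List[str]) -> SceneSummary:
        hazards = [t for t in scene_tokens if t in {"pedestrian", "cyclist"}]
        speed = 2.0 if hazards else 8.0  # slow down when hazards are present
        return SceneSummary("keep_lane", hazards, speed)

class Driver:
    """Stand-in for the lightweight trainable planner: semantics -> waypoints."""
    def plan(self, summary: SceneSummary, horizon_s: float = 3.0,
             dt: float = 0.5) -> List[Tuple[float, float]]:
        steps = int(horizon_s / dt)
        # Toy straight-line rollout at the suggested speed (keep_lane only).
        return [(summary.target_speed_mps * dt * (i + 1), 0.0)
                for i in range(steps)]

nav, drv = Navigator(), Driver()
summary = nav.reason(["car", "pedestrian", "traffic_light"])
waypoints = drv.plan(summary)
```

The key property this interface illustrates is that the Navigator's output is human-readable, so the planner's inputs can be inspected directly, and only the small Driver needs gradient updates during training.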