🤖 AI Summary
To address two critical bottlenecks in LLM-driven autonomous driving—weak visual representation and model redundancy—this paper proposes a vision-enhanced lightweight multimodal large language model (MLLM). Methodologically, it introduces three mechanisms: (1) cycle-consistent dynamic visual token pruning, (2) memory-enhanced feature aggregation, and (3) distance-decoupled instruction attention, enabling efficient visual token compression and long-range vision–language joint modeling. Evaluated end-to-end in CARLA under closed-loop settings, the model reduces parameters from 7B to 1.3B (an 81% reduction) while improving driving scores by 15.4%, 16.8%, and 7.6% on the tiny-, short-, and long-distance benchmarks, respectively. These gains demonstrate substantial improvements in perceptual robustness and deployment feasibility for real-world autonomous driving systems.
📝 Abstract
Recent advancements in language-grounded autonomous driving have been significantly promoted by the sophisticated cognition and reasoning capabilities of large language models (LLMs). However, current LLM-based approaches face critical challenges: (1) Failure analysis reveals that frequent collisions and obstructions, stemming from limitations in visual representations, remain the primary obstacles to robust driving performance. (2) The large parameter counts of LLMs pose considerable deployment hurdles. To address these limitations, we introduce VLDrive, a novel approach featuring a lightweight MLLM architecture with enhanced vision components. VLDrive achieves compact visual tokens through innovative strategies, including cycle-consistent dynamic visual pruning and memory-enhanced feature aggregation. Furthermore, we propose a distance-decoupled instruction attention mechanism to improve joint visual-linguistic feature learning, particularly for long-range visual tokens. Extensive experiments conducted in the CARLA simulator demonstrate VLDrive's effectiveness. Notably, VLDrive achieves state-of-the-art driving performance while reducing parameters by 81% (from 7B to 1.3B), yielding substantial driving score improvements of 15.4%, 16.8%, and 7.6% at tiny, short, and long distances, respectively, in closed-loop evaluations. Code is available at https://github.com/ReaFly/VLDrive.
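The abstract does not spell out how visual token pruning works; as a rough intuition for the general idea, a generic score-based pruning step can be sketched as below. The importance scores, keep ratio, and function names here are illustrative assumptions, not VLDrive's actual cycle-consistent mechanism.

```python
import numpy as np

def prune_visual_tokens(tokens, scores, keep_ratio=0.25):
    """Generic sketch: keep only the top-scoring fraction of visual tokens.

    tokens: (N, D) array of visual token embeddings
    scores: (N,) per-token importance scores (e.g., attention-derived)
    keep_ratio: fraction of tokens to retain (illustrative default)
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep_idx = np.argsort(scores)[-n_keep:]  # indices of the n_keep highest scores
    keep_idx.sort()                          # restore the original token order
    return tokens[keep_idx], keep_idx

# Toy example: 8 visual tokens with 4-dim embeddings and made-up scores.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6])
kept, idx = prune_visual_tokens(tokens, scores, keep_ratio=0.5)
print(idx)  # -> [1 3 5 7], the four highest-scoring tokens in order
```

Shrinking the visual token set this way is what makes a 1.3B language backbone feasible: the LLM's attention cost scales with sequence length, so fewer visual tokens directly cut compute and memory.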