🤖 AI Summary
Existing vision-language models (VLMs) for autonomous driving suffer from high computational overhead, poor real-time deployability, and lack of native support for multi-image inputs. To address these limitations, this work proposes a lightweight, end-to-end VLM architecture. Methodologically, it (1) pioneers direct injection of multi-level 2D visual features as tokens into the language model—bypassing redundant visual encoding; (2) introduces FE-MoE, a Feature Engineering Mixture-of-Experts module that adaptively fuses cross-level visual representations; and (3) incorporates DI-Adapter, a Dynamic Instruction Adapter enabling instruction-driven vision–language alignment. The smallest variant contains only 83M parameters, yet achieves state-of-the-art performance while substantially reducing FLOPs and inference latency. Crucially, the architecture natively supports multi-camera image inputs, ensuring strong real-time capability and practical deployability in autonomous driving systems.
📝 Abstract
Vision-language models (VLMs) serve as general-purpose end-to-end models in autonomous driving, performing subtasks such as prediction, planning, and perception through question-and-answer interactions. However, most existing methods rely on computationally expensive visual encoders and large language models (LLMs), making them difficult to deploy in real-world scenarios and real-time applications. Meanwhile, most existing VLMs cannot process multiple images, making them ill-suited to the multi-camera perception setups used in autonomous driving. To address these issues, we propose a novel framework called MiniDrive, which incorporates our proposed Feature Engineering Mixture of Experts (FE-MoE) module and Dynamic Instruction Adapter (DI-Adapter). The FE-MoE efficiently maps 2D features into visual token embeddings before they are input into the language model. The DI-Adapter enables the visual token embeddings to change dynamically with the instruction text embeddings, resolving the issue in previous approaches where the visual token embeddings for a given image are static regardless of the instruction. Compared to previous works, MiniDrive achieves state-of-the-art performance in terms of parameter count, floating-point operations (FLOPs), and response efficiency, with the smallest version containing only 83M parameters.
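The abstract does not give implementation details, but the two modules can be illustrated with a minimal NumPy sketch of one plausible reading: gating-weighted expert projections for FE-MoE, and text-conditioned cross-attention for DI-Adapter so the same image yields different visual tokens under different instructions. All class names, dimensions, initializations, and the residual connection here are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FEMoE:
    """Hypothetical FE-MoE sketch: each expert projects one view of the
    2D features to the LM embedding width; a learned gate mixes them."""
    def __init__(self, n_experts, feat_dim, embed_dim):
        self.experts = [rng.standard_normal((feat_dim, embed_dim)) * 0.02
                        for _ in range(n_experts)]
        self.gate = rng.standard_normal((feat_dim, n_experts)) * 0.02

    def __call__(self, feats):                 # feats: (tokens, feat_dim)
        weights = softmax(feats @ self.gate)   # (tokens, n_experts)
        outs = np.stack([feats @ w for w in self.experts], axis=1)
        return (weights[..., None] * outs).sum(axis=1)  # (tokens, embed_dim)

class DIAdapter:
    """Hypothetical DI-Adapter sketch: cross-attention from visual tokens
    to instruction text embeddings, so visual tokens depend on the text."""
    def __init__(self, dim):
        self.wq = rng.standard_normal((dim, dim)) * 0.02
        self.wk = rng.standard_normal((dim, dim)) * 0.02
        self.wv = rng.standard_normal((dim, dim)) * 0.02

    def __call__(self, vis, txt):              # vis: (Nv, d), txt: (Nt, d)
        q, k, v = vis @ self.wq, txt @ self.wk, txt @ self.wv
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (Nv, Nt)
        return vis + attn @ v                  # instruction-conditioned tokens
```

Under this reading, two different instructions produce two different sets of visual token embeddings for the same image, which is exactly the static-token limitation the DI-Adapter is said to remove.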