🤖 AI Summary
Current large vision-language models (LVLMs) for autonomous driving are constrained by single front-view input and purely 2D scene understanding, limiting multi-view holistic interaction and 3D spatial-semantic alignment. To address this, we contribute: (1) NuInteract, the first large-scale multi-view vision-language dataset, containing 1.5 million image-text pairs; (2) DriveMonkey, a framework with a plug-and-play architecture enabling query-driven collaboration between an LVLM and a 3D spatial processor, integrating learnable queries with a spatial modeling module initialized from a pre-trained 3D detector; and (3) support for joint multi-view image-language reasoning and instruction-guided 3D grounding. Experiments demonstrate that DriveMonkey achieves a 9.86% relative improvement over general-purpose LVLMs on the 3D visual grounding task, significantly enhancing spatial reasoning and instruction-following in complex traffic scenarios.
📝 Abstract
Large Vision-Language Models (LVLMs) have significantly advanced image understanding, and their comprehension and reasoning capabilities enable promising applications in autonomous driving scenarios. However, existing research typically focuses on front-view perspectives and partial objects within scenes, falling short of comprehensive scene understanding. Moreover, existing LVLMs lack an explicit 2D-to-3D mapping relationship and insufficiently integrate 3D object localization with instruction understanding. To tackle these limitations, we first introduce NuInteract, a large-scale dataset with over 1.5M multi-view image-language pairs spanning dense scene captions and diverse interactive tasks. Furthermore, we propose DriveMonkey, a simple yet effective framework that seamlessly integrates LVLMs with a spatial processor through a series of learnable queries. The spatial processor, designed as a plug-and-play component, can be initialized with pre-trained 3D detectors to improve 3D perception. Our experiments show that DriveMonkey outperforms general LVLMs, notably achieving a 9.86% improvement on the 3D visual grounding task. The dataset and code will be released at https://github.com/zc-zhao/DriveMonkey.
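The abstract's core mechanism, learnable queries that bridge a spatial processor and the LVLM, can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: all dimensions, the single-head cross-attention, and the names (`cross_attend`, `spatial_tokens`) are invented for illustration; the real spatial processor is initialized from a pre-trained 3D detector and trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 6 camera views,
# 256 feature tokens per view, hidden size 64, 8 learnable queries.
NUM_VIEWS, TOKENS_PER_VIEW, DIM, NUM_QUERIES = 6, 256, 64, 8

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, feats):
    """Single-head cross-attention: queries gather context from the
    flattened multi-view features (a stand-in for the spatial processor)."""
    attn = softmax(queries @ feats.T / np.sqrt(feats.shape[-1]))
    return attn @ feats

# Multi-view image features, e.g. from a pre-trained 3D detector backbone.
multi_view_feats = rng.normal(size=(NUM_VIEWS * TOKENS_PER_VIEW, DIM))

# Learnable queries (randomly initialized here; learned in practice).
queries = rng.normal(size=(NUM_QUERIES, DIM))

# Queries distill 3D-aware spatial context from all views...
spatial_tokens = cross_attend(queries, multi_view_feats)

# ...and are prepended to the LVLM's text-token embeddings, so the
# language model can reason jointly over instructions and 3D context.
text_tokens = rng.normal(size=(32, DIM))  # hypothetical prompt embedding
lvlm_input = np.concatenate([spatial_tokens, text_tokens], axis=0)
print(lvlm_input.shape)  # (40, 64)
```

The plug-and-play property comes from this interface: only the fixed-size query outputs cross the boundary, so the spatial processor can be swapped or re-initialized from a different 3D detector without changing the LVLM side.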