🤖 AI Summary
This work addresses the challenge of unifying perception, planning, and road graph estimation in autonomous driving within a single end-to-end multimodal foundation model. We propose EMMA, an End-to-end Multimodal Model for Autonomous driving, which jointly produces planner trajectories, 3D object detections, and road graph elements directly from raw camera inputs. Its core innovation lies in representing all non-sensor inputs (e.g., navigation instructions and ego vehicle status) and all task outputs (e.g., trajectories and 3D object locations) as natural language text, so that a pre-trained multimodal large language model (MLLM) can process every driving task in a shared language space and generate each task's output from a task-specific prompt, without intermediate modules or hand-crafted features. This formulation enables end-to-end co-training across planning, perception, and mapping. EMMA achieves state-of-the-art motion planning performance on nuScenes, competitive results on the Waymo Open Motion Dataset, and competitive camera-primary 3D object detection on the Waymo Open Dataset, indicating that language-space unification generalizes across autonomous driving tasks.
📝 Abstract
We introduce EMMA, an End-to-end Multimodal Model for Autonomous driving. Built on a multi-modal large language model foundation, EMMA directly maps raw camera sensor data into various driving-specific outputs, including planner trajectories, perception objects, and road graph elements. EMMA maximizes the utility of world knowledge from the pre-trained large language models by representing all non-sensor inputs (e.g., navigation instructions and ego vehicle status) and outputs (e.g., trajectories and 3D locations) as natural language text. This approach allows EMMA to jointly process various driving tasks in a unified language space and generate the outputs for each task using task-specific prompts. Empirically, we demonstrate EMMA's effectiveness by achieving state-of-the-art performance in motion planning on nuScenes as well as competitive results on the Waymo Open Motion Dataset (WOMD). EMMA also yields competitive results for camera-primary 3D object detection on the Waymo Open Dataset (WOD). We show that co-training EMMA with planner trajectories, object detection, and road graph tasks yields improvements across all three domains, highlighting EMMA's potential as a generalist model for autonomous driving applications. However, EMMA also exhibits certain limitations: it can process only a small number of image frames, does not incorporate accurate 3D sensing modalities like LiDAR or radar, and is computationally expensive. We hope that our results will inspire further research to mitigate these issues and to further evolve the state of the art in autonomous driving model architectures.
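To make the language-space unification concrete, the sketch below shows one plausible way to serialize non-sensor inputs (navigation command, ego status) into a task-specific text prompt and to decode a trajectory from the model's text output. This is a hypothetical illustration only: the function names, prompt wording, and waypoint format are assumptions, not EMMA's actual implementation.

```python
# Hypothetical sketch of EMMA-style text serialization (names and
# formats assumed, not taken from the paper): non-sensor inputs and
# task outputs are expressed as plain text so an MLLM can consume
# and emit them alongside camera tokens.

def build_prompt(command: str, velocity_mps: float, accel_mps2: float) -> str:
    """Compose a task-specific planning prompt from ego state (illustrative format)."""
    return (
        "Task: plan the ego vehicle trajectory.\n"
        f"Navigation command: {command}\n"
        f"Ego velocity: {velocity_mps:.1f} m/s, acceleration: {accel_mps2:.1f} m/s^2\n"
        "Output future waypoints as (x, y) in meters, one per line."
    )

def parse_waypoints(model_output: str) -> list[tuple[float, float]]:
    """Decode the model's text output back into numeric (x, y) waypoints."""
    waypoints = []
    for line in model_output.strip().splitlines():
        x, y = line.strip("() ").split(",")
        waypoints.append((float(x), float(y)))
    return waypoints

prompt = build_prompt("turn left at the next intersection", 5.2, 0.3)
decoded = parse_waypoints("(1.0, 0.2)\n(2.1, 0.9)\n(3.0, 2.0)")
```

Because both directions are plain text, the same MLLM decoder can be reused for detection or road graph outputs simply by swapping the task prompt and the output grammar.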