OpenEMMA: Open-Source Multimodal Model for End-to-End Autonomous Driving

📅 2024-12-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
End-to-end autonomous driving faces several bottlenecks: high fine-tuning costs, heavy reliance on computational resources, and a need for massive labeled datasets. To address these challenges, we propose the first lightweight, open-source end-to-end framework that integrates multimodal large language models (MLLMs) with chain-of-thought (CoT) reasoning to realize a driving policy built on joint vision–language modeling. Our key contributions are threefold: (1) a structured reasoning mechanism that enhances the interpretability and robustness of driving decisions; (2) a parameter-efficient fine-tuning paradigm that sharply reduces training resource requirements; and (3) improved cross-scenario generalization, with performance surpassing mainstream baselines across diverse, complex driving tasks. All code and pre-trained models are publicly released, enabling rapid adaptation and deployment even in low-resource environments.

📝 Abstract
Since the advent of Multimodal Large Language Models (MLLMs), they have made a significant impact across a wide range of real-world applications, particularly in Autonomous Driving (AD). Their ability to process complex visual data and reason about intricate driving scenarios has paved the way for a new paradigm in end-to-end AD systems. However, the progress of developing end-to-end models for AD has been slow, as existing fine-tuning methods demand substantial resources, including extensive computational power, large-scale datasets, and significant funding. Drawing inspiration from recent advancements in inference computing, we propose OpenEMMA, an open-source end-to-end framework based on MLLMs. By incorporating the Chain-of-Thought reasoning process, OpenEMMA achieves significant improvements over the baseline when leveraging a diverse range of MLLMs. Furthermore, OpenEMMA demonstrates effectiveness, generalizability, and robustness across a variety of challenging driving scenarios, offering a more efficient and effective approach to autonomous driving. We release all code at https://github.com/taco-group/OpenEMMA.
Problem

Research questions and friction points this paper is trying to address.

Develops an open-source, end-to-end autonomous driving model.
Enhances multimodal reasoning for complex driving scenarios.
Reduces resource demands of model fine-tuning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

An open-source multimodal model for autonomous driving.
Incorporates a Chain-of-Thought reasoning process.
Leverages a diverse range of Multimodal Large Language Models.
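The CoT-driven pipeline described above can be sketched in miniature: compose a staged reasoning prompt for an MLLM, then parse waypoints out of its free-form answer. The prompt wording, the `x,y` waypoint format, and both function names are illustrative assumptions, not OpenEMMA's actual interface.

```python
def build_cot_prompt(scene_description: str, speed_mps: float) -> str:
    """Compose a chain-of-thought prompt that asks the MLLM to reason in
    stages (scene understanding -> intent -> trajectory) before emitting
    waypoints. Prompt structure is a hypothetical sketch."""
    return (
        "You are a driving assistant. Reason step by step.\n"
        f"Scene: {scene_description}\n"
        f"Ego speed: {speed_mps:.1f} m/s\n"
        "Step 1: Describe critical objects and their motion.\n"
        "Step 2: State the ego vehicle's high-level intent.\n"
        "Step 3: Output future waypoints, one per line, as 'x,y' in meters."
    )

def parse_waypoints(response: str) -> list[tuple[float, float]]:
    """Extract 'x,y' waypoint lines from the model's free-form answer,
    skipping the reasoning text that precedes them."""
    waypoints = []
    for line in response.splitlines():
        parts = line.strip().split(",")
        if len(parts) == 2:
            try:
                waypoints.append((float(parts[0]), float(parts[1])))
            except ValueError:
                continue  # a prose line that happened to contain a comma
    return waypoints
```

Keeping the reasoning steps in the prompt, rather than asking only for waypoints, is what lets a general-purpose MLLM expose an interpretable decision trace alongside the trajectory.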