Imagine-2-Drive: Leveraging High-Fidelity World Models via Multi-Modal Diffusion Policies

📅 2024-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
World models suffer from long-horizon policy degradation caused by prediction distortion, and conventional unimodal policies cannot capture the multimodal nature of driving decisions. To address both issues, this paper proposes an end-to-end high-fidelity world model jointly optimized with a multimodal diffusion policy. The method introduces (1) DiffDreamer, a generative diffusion-based world model that mitigates error accumulation by jointly denoising multiple future steps; and (2) the Diffusion Policy Actor (DPA), a multimodal policy that learns trajectory distributions in latent space and enables closed-loop policy training. Evaluated on the CARLA benchmark, the framework achieves a 15% improvement in Route Completion and a 20% gain in Success Rate over state-of-the-art world-model-based reinforcement learning (WMRL) approaches, demonstrating superior fidelity and decision diversity.

📝 Abstract
World Model-based Reinforcement Learning (WMRL) enables sample-efficient policy learning by reducing the need for online interactions, which can be costly and unsafe, especially for autonomous driving. However, existing world models often suffer from low prediction fidelity and compounding one-step errors, leading to policy degradation over long horizons. Additionally, traditional RL policies, often deterministic or single-Gaussian, fail to capture the multi-modal nature of decision-making in complex driving scenarios. To address these challenges, we propose Imagine-2-Drive, a novel WMRL framework that integrates a high-fidelity world model with a multi-modal diffusion-based policy actor. It consists of two key components: DiffDreamer, a diffusion-based world model that generates future observations simultaneously, mitigating error accumulation, and DPA (Diffusion Policy Actor), a diffusion-based policy that models diverse and multi-modal trajectory distributions. By training DPA within DiffDreamer, our method enables robust policy learning with minimal online interactions. We evaluate our method in CARLA using standard driving benchmarks and demonstrate that it outperforms prior world model baselines, improving Route Completion and Success Rate by 15% and 20%, respectively.
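A diffusion policy like DPA produces an action trajectory by starting from Gaussian noise and iteratively denoising it, which is what lets it represent multi-modal trajectory distributions. The sketch below shows the generic DDPM-style reverse-sampling loop only; it is not the paper's implementation. `toy_denoiser` is a hypothetical stand-in for the learned noise-prediction network (here it analytically denoises toward a fixed `target` trajectory so the loop is runnable), and the schedule values are common defaults, not taken from the paper.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Standard linear variance schedule used in DDPM-style samplers.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_denoiser(x_t, t, alpha_bars, target):
    # Hypothetical stand-in for the trained noise predictor eps_theta(x_t, t):
    # returns the exact noise under the forward process if the clean
    # trajectory were `target`. A real DPA would condition on latent state.
    ab = alpha_bars[t]
    return (x_t - np.sqrt(ab) * target) / np.sqrt(1.0 - ab)

def sample_trajectory(target, T=50, horizon=8, dim=2, seed=0):
    # Reverse diffusion: start from pure noise, denoise step by step.
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal((horizon, dim))
    for t in reversed(range(T)):
        eps_hat = toy_denoiser(x, t, alpha_bars, target)
        # DDPM posterior mean update.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            # Injected noise at intermediate steps is what allows the
            # policy to sample different modes on different runs.
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

Because the intermediate steps are stochastic, repeated calls with different seeds (or different conditioning) yield distinct trajectories, in contrast to a deterministic or single-Gaussian policy that collapses to one mode.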
Problem

Research questions and friction points this paper is trying to address.

Improves prediction fidelity in world models for autonomous driving.
Addresses multi-modal decision-making in complex driving scenarios.
Reduces online interactions for safer and more efficient policy learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-fidelity world model reduces error accumulation
Multi-modal diffusion policy captures diverse decisions
Minimizes online interactions for robust policy learning
Anant Garg
The International Institute of Information Technology, Hyderabad
K Madhava Krishna
Professor, IIIT Hyderabad
Robotics · Computer Vision