WcDT: World-centric Diffusion Transformer for Traffic Scene Generation

📅 2024-04-02
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the limited realism and diversity of multi-agent future-trajectory generation in autonomous driving. We propose a world-centric diffusion-transformer framework. Methodologically, we introduce the first integration of Denoising Diffusion Probabilistic Models (DDPM) and Diffusion Transformers (DiT) to construct an "Agent Move Statement" representation; design a world-centric feature fusion mechanism that jointly models historical trajectories, high-definition maps, and traffic signals in a global context; and employ a multi-source transformer encoder coupled with a conditional trajectory decoder for end-to-end, multimodal trajectory generation. Evaluated on the nuScenes benchmark, our approach achieves significant improvements in trajectory diversity (23.6% reduction in Minimum Recall Distance, MRD) and realism (18.4% reduction in Fréchet Inception Distance, FID). The generated trajectories have been deployed in a closed-loop autonomous driving simulation system.
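The summary's core mechanism is DDPM-style noising and denoising of trajectory representations. A minimal sketch of the forward (noising) process, with purely illustrative shapes and names (not the authors' implementation):

```python
import numpy as np

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule: per-step betas and cumulative alpha-bars."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas_cum = np.cumprod(1.0 - betas)
    return betas, alphas_cum

def forward_noise(x0, t, alphas_cum, rng):
    """Sample x_t ~ q(x_t | x_0) = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    a_bar = alphas_cum[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps
    return xt, eps

# Illustrative batch: 4 agents, 8 future waypoints, (x, y) coordinates.
rng = np.random.default_rng(0)
trajs = rng.standard_normal((4, 8, 2))
_, alphas_cum = make_schedule()
xt, eps = forward_noise(trajs, t=50, alphas_cum=alphas_cum, rng=rng)
print(xt.shape)  # (4, 8, 2)
```

In the paper's pipeline, a DiT-style network would be trained to predict `eps` from `x_t` and the step index, so trajectories can be generated by iteratively reversing this process.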

📝 Abstract
In this paper, we introduce a novel approach for autonomous driving trajectory generation by harnessing the complementary strengths of diffusion probabilistic models (a.k.a., diffusion models) and transformers. Our proposed framework, termed the "World-Centric Diffusion Transformer" (WcDT), optimizes the entire trajectory generation process, from feature extraction to model inference. To enhance scene diversity and stochasticity, the historical trajectory data is first preprocessed into an "Agent Move Statement" and encoded into latent space using Denoising Diffusion Probabilistic Models (DDPM) enhanced with Diffusion Transformer (DiT) blocks. Then, the latent features, historical trajectories, HD map features, and historical traffic signal information are fused with various transformer-based encoders that enhance the interaction of agents with other elements in the traffic scene. The encoded traffic scenes are then decoded by a trajectory decoder to generate multimodal future trajectories. Comprehensive experimental results show that the proposed approach exhibits superior performance in generating both realistic and diverse trajectories, showing its potential for integration into autonomous driving simulation systems. Our code is available at https://github.com/yangchen1997/WcDT.
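The abstract's fusion step concatenates tokens from several sources (denoised latents, agent histories, HD-map polylines, traffic signals) so self-attention can mix every scene element globally. A minimal single-head sketch of that idea; all shapes, names, and the one-head simplification are assumptions, not the paper's architecture:

```python
import numpy as np

def self_attention(tokens):
    """Single-head scaled dot-product self-attention over one token sequence."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ tokens

def fuse_scene(latent, history, hd_map, signals):
    """Concatenate per-source tokens into one sequence and attend globally."""
    scene = np.concatenate([latent, history, hd_map, signals], axis=0)
    return self_attention(scene)

rng = np.random.default_rng(0)
d = 16
fused = fuse_scene(rng.standard_normal((4, d)),   # 4 agent latent tokens
                   rng.standard_normal((4, d)),   # 4 history tokens
                   rng.standard_normal((10, d)),  # 10 map-polyline tokens
                   rng.standard_normal((2, d)))   # 2 traffic-signal tokens
print(fused.shape)  # (20, 16)
```

A trajectory decoder would then read the fused tokens back out as multimodal future trajectories, one mode per sampled latent.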
Problem

Research questions and friction points this paper is trying to address.

Generating realistic and diverse autonomous driving trajectories.
Enhancing scene diversity and stochasticity with diffusion models and transformers.
Integrating historical trajectories and HD maps for accurate trajectory prediction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines diffusion models with transformers.
Enhances scene diversity using DDPM and DiT blocks.
Fuses multiple feature sources via transformer encoders.
Chen Yang
Department of Computer Science and Informatics, Cardiff University, Cardiff, UK
Aaron Xuxiang Tian
Information Networking Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
Dong Chen
Environmental Institute & Link Lab & Computer Science, University of Virginia, Charlottesville, VA, 22903, USA
Tianyu Shi
University of Toronto
Reinforcement Learning, Intelligent Transportation Systems, Large Language Models, AI, LLM Agents
Arsalan Heydarian
Associate Professor at University of Virginia
Intelligent Built Environments, User-centered Design, Behavioral Modeling, Virtual and Augmented Reality, Automation in Construction