GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS

📅 2024-08-02
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Multi-agent planning research is hindered by the need for billions of simulation steps and the low throughput of existing simulators. Method: This paper introduces GPUDrive, a GPU-accelerated, closed-loop, multi-agent driving simulator. It combines heterogeneous agent behavioral modeling with low-level CUDA optimizations, achieving over one million frames per second of throughput while remaining fully usable from Python. Built on the Madrona engine, with observation, reward, and dynamics functions written in C++ and lowered to high-performance CUDA, GPUDrive supports standard reinforcement learning (RL) workflows and loads scenarios from the Waymo Open Motion Dataset. Contribution/Results: It enables RL training on single tasks in minutes and scales to thousands of scenarios within hours. Empirical evaluation on the Waymo dataset demonstrates efficient goal-directed driving. The codebase and pre-trained models are publicly released.

📝 Abstract
Multi-agent learning algorithms have been successful at generating superhuman planning in various games but have had limited impact on the design of deployed multi-agent planners. A key bottleneck in applying these techniques to multi-agent planning is that they require billions of steps of experience. To enable the study of multi-agent planning at scale, we present GPUDrive. GPUDrive is a GPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine capable of generating over a million simulation steps per second. Observation, reward, and dynamics functions are written directly in C++, allowing users to define complex, heterogeneous agent behaviors that are lowered to high-performance CUDA. Despite these low-level optimizations, GPUDrive is fully accessible through Python, offering a seamless and efficient workflow for multi-agent, closed-loop simulation. Using GPUDrive, we train reinforcement learning agents on the Waymo Open Motion Dataset, achieving efficient goal-reaching in minutes and scaling to thousands of scenarios in hours. We open-source the code and pre-trained agents at https://github.com/Emerge-Lab/gpudrive.
Problem

Research questions and friction points this paper is trying to address.

Multi-agent learning algorithms require billions of steps of experience, which existing simulators cannot generate quickly enough.
Low simulation throughput limits the study of multi-agent planning at scale.
Despite superhuman results in games, these techniques have had limited impact on deployed multi-agent planners.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated multi-agent simulator
C++ and CUDA for high performance
Python interface for seamless workflow
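The abstract describes the core interface pattern: many worlds and agents stepped in a single batched call, with observations, rewards, and dynamics computed on the GPU but driven from Python. As a rough illustration of that pattern (not the actual GPUDrive API — `BatchedDrivingEnv` and all names below are hypothetical, and NumPy stands in for the CUDA backend), a vectorized closed-loop step might look like:

```python
import numpy as np

class BatchedDrivingEnv:
    """Toy stand-in for a GPU-batched driving simulator.

    Steps num_worlds * agents_per_world agents in one batched update;
    a real simulator would run the dynamics in CUDA kernels instead
    of NumPy, but the Python-facing loop is the same shape.
    """

    def __init__(self, num_worlds: int, agents_per_world: int):
        self.shape = (num_worlds, agents_per_world)
        self.positions = np.zeros(self.shape + (2,))
        self.goals = np.ones(self.shape + (2,))  # fixed goal per agent

    def reset(self) -> np.ndarray:
        self.positions[:] = 0.0
        return self._observe()

    def step(self, actions: np.ndarray):
        # Apply every agent's action in one vectorized update.
        self.positions += actions
        dist = np.linalg.norm(self.goals - self.positions, axis=-1)
        rewards = -dist          # dense goal-reaching reward
        dones = dist < 0.1       # done once within 0.1 of the goal
        return self._observe(), rewards, dones

    def _observe(self) -> np.ndarray:
        # Observation: vector from each agent to its goal.
        return self.goals - self.positions

env = BatchedDrivingEnv(num_worlds=4, agents_per_world=8)
obs = env.reset()
for _ in range(15):
    # Greedy policy: move 0.1 units toward the goal each step.
    norms = np.maximum(np.linalg.norm(obs, axis=-1, keepdims=True), 1e-8)
    obs, rewards, dones = env.step(0.1 * obs / norms)
```

Because all worlds advance in one call, the per-step Python overhead is amortized across every agent — the property that lets a CUDA-backed version of this loop reach millions of frames per second.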