SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the need for large-scale, high-fidelity, city-level traffic simulation in autonomous driving testing, this paper introduces SceneDiffuser++, the first end-to-end generative world model realizing the "CitySim" vision. Methodologically, it unifies scene generation, multi-agent behavior modeling, occlusion reasoning, environment simulation, and real-time agent insertion/removal, jointly optimized via a single loss function. It integrates diffusion models, spatiotemporal graph neural networks, and conditional generative modeling, trained end-to-end on an augmented version of the Waymo Open Motion Dataset (WOMD) with enlarged map regions. Contributions include: (1) the first city-scale point-to-point traffic simulator capable of fully automatic, map-guided generation and dynamic control of all traffic elements—vehicles, pedestrians, and traffic signals—compatible with high-definition maps and production-grade autonomous driving stacks; and (2) significantly improved realism and long-horizon motion consistency in extended simulations (>30 seconds).

📝 Abstract
The goal of traffic simulation is to augment a potentially limited amount of manually-driven miles that is available for testing and validation, with a much larger amount of simulated synthetic miles. The culmination of this vision would be a generative simulated city, where given a map of the city and an autonomous vehicle (AV) software stack, the simulator can seamlessly simulate the trip from point A to point B by populating the city around the AV and controlling all aspects of the scene, from animating the dynamic agents (e.g., vehicles, pedestrians) to controlling the traffic light states. We refer to this vision as CitySim, which requires an agglomeration of simulation technologies: scene generation to populate the initial scene, agent behavior modeling to animate the scene, occlusion reasoning, dynamic scene generation to seamlessly spawn and remove agents, and environment simulation for factors such as traffic lights. While some key technologies have been separately studied in various works, others such as dynamic scene generation and environment simulation have received less attention in the research community. We propose SceneDiffuser++, the first end-to-end generative world model trained on a single loss function capable of point A-to-B simulation on a city scale integrating all the requirements above. We demonstrate the city-scale traffic simulation capability of SceneDiffuser++ and study its superior realism under long simulation conditions. We evaluate the simulation quality on an augmented version of the Waymo Open Motion Dataset (WOMD) with larger map regions to support trip-level simulation.
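The abstract's rollout loop—populate the scene around the AV, advance all agents and traffic lights jointly, and spawn/remove agents as the AV travels from point A to point B—can be sketched as a toy simulation. This is a minimal illustrative stand-in, not the paper's method: the names (`Agent`, `step_world`, `SPAWN_RADIUS`), the constant-velocity behavior model, and the stochastic insertion rule are all hypothetical placeholders for the learned generative components.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of a CitySim-style rollout loop. All names and
# constants here are illustrative assumptions, not from the paper.
SPAWN_RADIUS = 100.0    # new agents appear within this distance of the AV
DESPAWN_RADIUS = 150.0  # agents beyond this distance are removed

@dataclass
class Agent:
    x: float  # 1-D position along the route, for simplicity
    v: float  # 1-D velocity

def step_world(av_x, agents, light_phase, rng):
    """Advance one tick: move agents, cycle lights, spawn/despawn."""
    # 1) Agent behavior modeling: a trivial constant-velocity stand-in
    #    for the learned generative policy.
    for a in agents:
        a.x += a.v
    # 2) Environment simulation: cycle the traffic-light phase.
    light_phase = (light_phase + 1) % 60  # e.g. a 60-tick signal cycle
    # 3) Dynamic scene generation: remove far agents, insert near ones.
    agents = [a for a in agents if abs(a.x - av_x) < DESPAWN_RADIUS]
    if rng.random() < 0.3:  # stochastic insertion, stand-in for the model
        agents.append(Agent(x=av_x + rng.uniform(-SPAWN_RADIUS, SPAWN_RADIUS),
                            v=rng.uniform(0.5, 2.0)))
    return agents, light_phase

rng = random.Random(0)
agents, phase = [], 0
av_x = 0.0
for t in range(300):  # long-horizon rollout (>30 s at 10 Hz)
    av_x += 1.0       # the AV drives from point A toward point B
    agents, phase = step_world(av_x, agents, phase, rng)
```

The point of the sketch is structural: a single `step_world` call touches behavior, environment state, and scene composition together, mirroring how SceneDiffuser++ optimizes all of these under one loss rather than stitching together separate simulators.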
Problem

Research questions and friction points this paper is trying to address.

Develops generative city-scale traffic simulation for autonomous vehicle testing
Integrates dynamic scene generation and environment simulation technologies
Enhances realism in long-duration simulations using a unified model
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end generative world model for city-scale simulation
Single loss function integrates all simulation requirements
Dynamic scene generation with occlusion and environment control