HAD-Gen: Human-like and Diverse Driving Behavior Modeling for Controllable Scenario Generation

📅 2025-03-19
🤖 AI Summary
To address two limitations of existing human driving behavior models in autonomous driving simulation, insufficient diversity and inadequate safety characterization, this paper proposes HAD-Gen, a controllable traffic scenario generation framework. The method integrates safety-feature-driven trajectory clustering with maximum entropy inverse reinforcement learning (MaxEnt IRL) to disentangle driving styles along interpretable dimensions, and improves policy generalizability via offline pre-training combined with multi-agent reinforcement learning. In benchmarking, the approach achieves a 90.96% goal-reaching rate (over 20 percentage points above prior methods) while keeping the off-road rate at 2.08% and the collision rate at 6.91%. Notably, the work explicitly incorporates safety features into driving-style disentanglement, improving behavioral realism, diversity, and controllability in high-fidelity traffic simulation.

📝 Abstract
Simulation-based testing has emerged as an essential tool for verifying and validating autonomous vehicles (AVs). However, contemporary methodologies, such as deterministic and imitation learning-based driver models, struggle to capture the variability of human-like driving behavior. Given these challenges, we propose HAD-Gen, a general framework for realistic traffic scenario generation that simulates diverse human-like driving behaviors. The framework first clusters the vehicle trajectory data into different driving styles according to safety features. It then employs maximum entropy inverse reinforcement learning on each of the clusters to learn the reward function corresponding to each driving style. Using these reward functions, the method integrates offline reinforcement learning pre-training and multi-agent reinforcement learning algorithms to obtain general and robust driving policies. Multi-perspective simulation results show that our proposed scenario generation framework can simulate diverse, human-like driving behaviors with strong generalization capability. The proposed framework achieves a 90.96% goal-reaching rate, an off-road rate of 2.08%, and a collision rate of 6.91% in the generalization test, outperforming prior approaches by over 20% in goal-reaching performance. The source code is released at https://github.com/RoboSafe-Lab/Sim4AD.
Problem

Research questions and friction points this paper is trying to address.

Simulating diverse human-like driving behaviors for autonomous vehicle testing.
Overcoming limitations of deterministic and imitation learning-based driver models.
Generating realistic traffic scenarios with strong generalization capability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clusters trajectory data by driving styles
Uses maximum entropy inverse reinforcement learning
Integrates offline and multi-agent reinforcement learning
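The clustering and IRL stages listed above can be sketched in a few lines. The sketch below is illustrative only: the safety features (mean time-to-collision and time headway), k-means clustering, and a linear reward model are assumptions for the sake of a runnable example, not the paper's exact design (the released Sim4AD code defines the real pipeline), and the offline/multi-agent RL stage is omitted.

```python
# Illustrative sketch: cluster trajectories by safety features, then fit a
# linear MaxEnt-IRL reward per cluster. Feature choices and k-means are
# assumptions, not the paper's implementation.
import numpy as np

def safety_features(traj):
    # traj: array of per-timestep (time_to_collision, time_headway) samples
    return traj.mean(axis=0)

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd's algorithm over the safety-feature vectors
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def maxent_irl_weights(expert_phi, candidate_phi, lr=0.1, iters=200):
    # Linear MaxEnt IRL: ascend the log-likelihood gradient so the soft
    # (Boltzmann) distribution over candidate trajectories matches the
    # expert feature expectations of this cluster.
    w = np.zeros(expert_phi.shape[-1])
    mu_expert = expert_phi.mean(axis=0)
    for _ in range(iters):
        logits = candidate_phi @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()
        mu_model = p @ candidate_phi
        w += lr * (mu_expert - mu_model)  # expert minus model expectations
    return w

# toy data: a "cautious" blob (high TTC/headway) and an "aggressive" blob
rng = np.random.default_rng(1)
trajs = [rng.normal([4.0, 2.0], 0.3, (20, 2)) for _ in range(30)] + \
        [rng.normal([1.5, 0.8], 0.3, (20, 2)) for _ in range(30)]
phis = np.array([safety_features(t) for t in trajs])
labels = kmeans(phis, k=2)
candidates = rng.normal([2.5, 1.5], 1.0, (200, 2))  # sampled alternatives
rewards = {c: maxent_irl_weights(phis[labels == c], candidates)
           for c in np.unique(labels)}
```

Each entry of `rewards` is a per-style reward weight vector; in the paper these learned rewards then drive offline RL pre-training and multi-agent RL to produce the final policies.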
Cheng Wang
School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
Lingxin Kong
School of Automation and Software Engineering, Shanxi University, Taiyuan 030031, China
Massimiliano Tamborski
School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, United Kingdom
Stefano V. Albrecht
School of Informatics, University of Edinburgh
Artificial Intelligence · Autonomous Agents · Multi-Agent Systems · Reinforcement Learning