AgentDrive: An Open Benchmark Dataset for Agentic AI Reasoning with LLM-Generated Scenarios in Autonomous Systems

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of large-scale, structured, and safety-critical driving scenarios in existing autonomous driving benchmarks, which hinders the training and evaluation of agent reasoning capabilities. The authors propose AgentDrive, the first structured driving scenario space built on seven orthogonal dimensions, coupled with an open dataset of 300,000 simulation-ready scenarios generated by large language models (LLMs) and validated against physical and behavioral constraints. Complementing this is AgentDrive-MCQ, a reasoning benchmark comprising 100,000 multiple-choice questions. Leveraging prompt-to-JSON generation, simulation rollouts, and automated annotation, the framework enables systematic evaluation of perception, planning, and decision-making across multiple dimensions. Evaluation of 50 prominent LLMs reveals that closed-source models lead in strategic reasoning, while open-source counterparts are rapidly closing the gap in structured and physical reasoning. The dataset and tools are publicly released.

📝 Abstract
The rapid advancement of large language models (LLMs) has sparked growing interest in their integration into autonomous systems for reasoning-driven perception, planning, and decision-making. However, evaluating and training such agentic AI models remains challenging due to the lack of large-scale, structured, and safety-critical benchmarks. This paper introduces AgentDrive, an open benchmark dataset containing 300,000 LLM-generated driving scenarios designed for training, fine-tuning, and evaluating autonomous agents under diverse conditions. AgentDrive formalizes a factorized scenario space across seven orthogonal axes: scenario type, driver behavior, environment, road layout, objective, difficulty, and traffic density. An LLM-driven prompt-to-JSON pipeline generates semantically rich, simulation-ready specifications that are validated against physical and schema constraints. Each scenario undergoes simulation rollouts, surrogate safety metric computation, and rule-based outcome labeling. To complement simulation-based evaluation, we introduce AgentDrive-MCQ, a 100,000-question multiple-choice benchmark spanning five reasoning dimensions: physics, policy, hybrid, scenario, and comparative reasoning. We conduct a large-scale evaluation of fifty leading LLMs on AgentDrive-MCQ. Results show that while proprietary frontier models perform best in contextual and policy reasoning, advanced open models are rapidly closing the gap in structured and physics-grounded reasoning. We release the AgentDrive dataset, AgentDrive-MCQ benchmark, evaluation code, and related materials at https://github.com/maferrag/AgentDrive
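The abstract describes a prompt-to-JSON pipeline whose outputs are validated against schema constraints over the seven scenario axes. As a minimal sketch of what such a schema check might look like, the following Python snippet validates a scenario record against a factorized axis space. The axis names follow the abstract, but the allowed values and the record format are illustrative assumptions, not the released AgentDrive schema:

```python
# Hypothetical schema check for an AgentDrive-style scenario record.
# Axis names come from the paper's abstract; the allowed values below
# are illustrative placeholders, not the actual dataset vocabulary.
SCENARIO_AXES = {
    "scenario_type": {"merge", "intersection", "cut_in", "overtake"},
    "driver_behavior": {"aggressive", "cautious", "distracted"},
    "environment": {"clear", "rain", "fog", "night"},
    "road_layout": {"highway", "urban", "roundabout"},
    "objective": {"reach_goal", "avoid_collision", "yield"},
    "difficulty": {"easy", "medium", "hard"},
    "traffic_density": {"low", "medium", "high"},
}

def validate_scenario(scenario: dict) -> list:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    for axis, allowed in SCENARIO_AXES.items():
        value = scenario.get(axis)
        if value is None:
            errors.append(f"missing axis: {axis}")
        elif value not in allowed:
            errors.append(f"invalid value for {axis}: {value!r}")
    return errors

example = {
    "scenario_type": "merge",
    "driver_behavior": "aggressive",
    "environment": "rain",
    "road_layout": "highway",
    "objective": "avoid_collision",
    "difficulty": "hard",
    "traffic_density": "high",
}
print(validate_scenario(example))  # []
```

In a full pipeline, records passing this schema gate would then go through the physical-constraint checks and simulation rollouts the abstract describes; those stages are not sketched here.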
Problem

Research questions and friction points this paper is trying to address.

autonomous systems
agentic AI
benchmark dataset
safety-critical scenarios
LLM reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic AI
LLM-generated scenarios
factorized scenario space
simulation-ready benchmark
reasoning evaluation