WOMD-Reasoning: A Large-Scale Dataset and Benchmark for Interaction and Intention Reasoning in Driving

📅 2024-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing driving scene understanding lacks large-scale, multimodal language datasets explicitly designed for traffic-rule-compliant and human-intention-driven long-range interaction reasoning. Method: We introduce WOMD-Reasoning—the first large-scale multimodal driving question-answering dataset (3M Q&A pairs), incorporating high-definition maps, agent trajectories, and bird’s-eye-view (BEV) images, and systematically modeling rule- and intention-triggered long-distance interactions. We further propose Motion-LLaVA, a motion-language joint reasoning architecture trained via instruction tuning and chain-of-thought prompting. Contribution/Results: Motion-LLaVA achieves significant improvements over state-of-the-art baselines on causal reasoning and intention recognition benchmarks. The dataset, code, and model are fully open-sourced, establishing a new benchmark for cognitive intelligence in autonomous driving.

📝 Abstract
We propose Waymo Open Motion Dataset-Reasoning (WOMD-Reasoning), a comprehensive large-scale dataset with 3 million Q&As built on WOMD, focusing on describing and reasoning about interactions and intentions in driving scenarios. Existing language datasets for driving primarily capture interactions caused by close distances. However, interactions induced by traffic rules and human intentions, which can occur over long distances, are not yet sufficiently covered. To address this, WOMD-Reasoning presents by far the largest multi-modal Q&A dataset on real-world driving scenarios, covering a wide range of driving topics from map descriptions and motion status descriptions to narratives and analyses of agents' interactions, behaviors, and intentions. We further introduce Motion-LLaVA, a motion-language model fine-tuned on the proposed dataset with robust interaction reasoning capabilities. We benchmark its performance across various configurations, including different input modalities, reasoning techniques, and network architectures. The robust, diverse, and multi-modal nature of WOMD-Reasoning highlights its potential to advance future autonomous driving research and enable a broad range of applications. The dataset and its vision modal extension are available at https://waymo.com/open/download, and the code and prompts to build it are available at https://github.com/yhli123/WOMD-Reasoning.
Problem

Research questions and friction points this paper is trying to address.

Lack of dedicated language datasets covering interactions induced by traffic rules and human intentions
Need for a large-scale, multi-modal Q&A dataset on real-world driving scenarios
Enhancing interaction reasoning and traffic-rule compliance in autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale Q&A dataset dedicated to driving interactions and intentions
Multi-modal dataset with 3 million Q&As built on WOMD
Motion-LLaVA, a motion-language model fine-tuned for interaction reasoning