WaymoQA: A Multi-View Visual Question Answering Dataset for Safety-Critical Reasoning in Autonomous Driving

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous driving in safety-critical scenarios faces high-level reasoning challenges—mitigating one risk often introduces another—exacerbated by the limited environmental understanding afforded by a single forward-facing view. To address this, we propose a multi-view–driven safety-critical reasoning paradigm, structured as a staged inference framework: first resolving immediate hazards, then anticipating and mitigating secondary risks. We formally define this task for the first time and introduce WaymoQA, a large-scale, multi-view, vision-language question-answering dataset (35K samples) covering both image and video modalities and diverse safety-critical questions (multiple-choice and open-ended). Leveraging multimodal large language models, we design a multi-view fusion mechanism supervised by human annotations. Experiments reveal severe deficiencies of existing models on such reasoning; however, fine-tuning on WaymoQA yields substantial performance gains, validating its critical role in developing driving agents with enhanced safety assurance and robust reasoning capabilities.
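To make the dataset structure concrete, below is a minimal sketch of what a single multi-view WaymoQA-style sample could look like, assuming a JSON-style record. All field names, camera names, and values here are hypothetical illustrations, not the released schema.

```python
# Hypothetical sketch of one multi-view WaymoQA-style sample.
# Field names and values are illustrative assumptions, not the released schema.
sample = {
    "scene_id": "waymo_scene_0001",
    "modality": "image",                 # "image" or "video"
    "views": {                           # multi-view inputs around the ego vehicle
        "front": "frames/0001_front.jpg",
        "front_left": "frames/0001_front_left.jpg",
        "front_right": "frames/0001_front_right.jpg",
        "side_left": "frames/0001_side_left.jpg",
        "side_right": "frames/0001_side_right.jpg",
    },
    "question_type": "multiple_choice",  # or "open_ended"
    "question": "A cyclist is crossing ahead; which maneuver avoids the immediate "
                "hazard without creating a new risk for the vehicle in the left lane?",
    "choices": ["Brake hard", "Swerve left", "Slow and yield", "Accelerate through"],
    "answer": "Slow and yield",
    "reasoning_stages": {
        "immediate_risk": "Cyclist crossing in the ego lane.",
        "downstream_risk": "Swerving left would cut off the vehicle in the adjacent lane.",
    },
}
```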

📝 Abstract
Recent advancements in multimodal large language models (MLLMs) have shown strong understanding of driving scenes, drawing interest in their application to autonomous driving. However, high-level reasoning in safety-critical scenarios, where avoiding one traffic risk can create another, remains a major challenge. Such reasoning is often infeasible with only a single front view and requires a comprehensive view of the environment, which we achieve through multi-view inputs. We define Safety-Critical Reasoning as a new task that leverages multi-view inputs to address this challenge. Then, we distill Safety-Critical Reasoning into two stages: first resolve the immediate risk, then mitigate the decision-induced downstream risks. To support this, we introduce WaymoQA, a dataset of 35,000 human-annotated question-answer pairs covering complex, high-risk driving scenarios. The dataset includes multiple-choice and open-ended formats across both image and video modalities. Experiments reveal that existing MLLMs underperform in safety-critical scenarios compared to normal scenes, but fine-tuning with WaymoQA significantly improves their reasoning ability, highlighting the effectiveness of our dataset in developing safer and more reasoning-capable driving agents.
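As a rough illustration of the two-stage formulation described in the abstract, the sketch below queries a generic multimodal model twice: first for the immediate risk and the maneuver that resolves it, then for the downstream risks that maneuver would induce. The `query_mllm` callable and the prompt wording are assumptions for illustration, not the authors' implementation.

```python
from typing import Callable, Dict

def safety_critical_reasoning(
    views: Dict[str, str],
    question: str,
    query_mllm: Callable[[Dict[str, str], str], str],
) -> Dict[str, str]:
    """Two-stage sketch: resolve the immediate risk, then mitigate downstream risks.

    `views` maps camera names to image/video paths; `query_mllm` is any function
    that sends multi-view inputs plus a text prompt to a multimodal LLM and
    returns its textual answer. Both are placeholders, not the paper's API.
    """
    # Stage 1: identify and resolve the immediate hazard using all views.
    stage1_prompt = (
        f"{question}\n"
        "Step 1: Using every camera view, identify the most immediate traffic risk "
        "and propose the maneuver that resolves it."
    )
    immediate_decision = query_mllm(views, stage1_prompt)

    # Stage 2: check whether that maneuver creates new (decision-induced) risks
    # elsewhere in the scene, and refine the plan to mitigate them.
    stage2_prompt = (
        f"Proposed maneuver: {immediate_decision}\n"
        "Step 2: Inspect the other views for risks this maneuver would introduce "
        "(e.g., vehicles in adjacent lanes) and state the final, safer plan."
    )
    final_plan = query_mllm(views, stage2_prompt)

    return {"immediate_decision": immediate_decision, "final_plan": final_plan}
```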
Problem

Research questions and friction points this paper is trying to address.

Addresses safety-critical reasoning in autonomous driving scenarios
Leverages multi-view inputs to resolve immediate and downstream risks
Introduces WaymoQA dataset to improve multimodal model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view inputs for comprehensive scene understanding
Two-stage reasoning to resolve and mitigate risks
Fine-tuning MLLMs with annotated safety-critical dataset
👥 Authors
Seungjun Yu, Korea Advanced Institute of Science and Technology
Seonho Lee, KAIST AI (ex-ML intern @ Snap Inc.): Computer Vision, Machine Learning, Vision-Language Models, Generative AI
Namho Kim, Hanyang University
Jaeyo Shin, Korea Advanced Institute of Science and Technology
Junsung Park, Seoul National University: Deep Learning, Multi-modal Learning
Wonjeong Ryu, Korea Advanced Institute of Science and Technology
Raehyuk Jung, Korea Advanced Institute of Science and Technology
Hyunjung Shim, Associate Professor, KAIST: Computer Vision, Machine Learning