BATON: A Multimodal Benchmark for Bidirectional Automation Transition Observation in Naturalistic Driving

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current driving automation systems struggle to accurately predict the timing of bidirectional control transitions between human and machine, which contributes to high cognitive load and safety risks. This work introduces BATON, a large-scale naturalistic driving dataset that synchronously captures multimodal data (forward-facing and in-cabin video, CAN signals, radar, and GPS) to record, for the first time, the complete closed-loop process of control handover and takeover in real-world driving, and defines three benchmark tasks on top of it. Multimodal fusion analysis shows that predictions based solely on visual data are unreliable, whereas integrating CAN signals with route information substantially improves performance. Takeovers exhibit gradual dynamics and are therefore amenable to long-horizon prediction, while handovers depend more heavily on immediate contextual cues. These findings reveal an asymmetry between the two transition types in temporal dynamics and modality dependence, offering concrete guidance for context-aware human–machine interaction design.
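To make the fusion finding concrete, the sketch below shows one minimal late-fusion setup: a GRU encodes a window of CAN/route features and is concatenated with a projected, pooled video embedding before a binary transition head. This is an assumed architecture for illustration only, not the paper's model; all module names, dimensions, and feature layouts are hypothetical.

```python
# Minimal late-fusion sketch (assumed architecture, not BATON's baseline):
# a GRU summarizes a window of CAN/route-context features, a linear layer
# projects a pooled video embedding, and the concatenation feeds a binary
# transition classifier (handover/takeover vs. no transition).
import torch
import torch.nn as nn

class LateFusionTransitionClassifier(nn.Module):
    def __init__(self, can_dim=16, video_dim=512, hidden=64):
        super().__init__()
        self.can_encoder = nn.GRU(can_dim, hidden, batch_first=True)
        self.video_proj = nn.Linear(video_dim, hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, can_seq, video_emb):
        # can_seq: (B, T, can_dim) decoded CAN + route-context features
        # video_emb: (B, video_dim) pooled front-view / in-cabin embedding
        _, h = self.can_encoder(can_seq)            # h: (1, B, hidden)
        fused = torch.cat([h[-1], self.video_proj(video_emb)], dim=-1)
        return self.head(fused).squeeze(-1)         # transition logit
```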
📝 Abstract
Existing driving automation (DA) systems on production vehicles rely on human drivers to decide when to engage DA while requiring them to remain continuously attentive and ready to intervene. This design demands substantial situational judgment and imposes significant cognitive load, leading to steep learning curves, suboptimal user experience, and safety risks from both over-reliance and delayed takeover. Predicting when drivers hand over control to DA and when they take it back is therefore critical for designing proactive, context-aware human–machine interfaces (HMI), yet existing datasets rarely capture the full multimodal context of these transitions: road scene, driver state, vehicle dynamics, and route environment. To fill this gap, we introduce BATON, a large-scale naturalistic dataset capturing real-world DA usage across 127 drivers and 136.6 hours of driving. The dataset synchronizes front-view video, in-cabin video, decoded CAN bus signals, radar-based lead-vehicle interaction, and GPS-derived route context, forming a closed-loop multimodal record around each control transition. We define three benchmark tasks (driving action understanding, handover prediction, and takeover prediction) and evaluate baselines spanning sequence models, classical classifiers, and zero-shot VLMs. Results show that visual input alone is insufficient for reliable transition prediction: front-view video captures road context but not driver state, while in-cabin video reflects driver readiness but not the external scene. Incorporating CAN and route-context signals substantially improves performance over video-only settings, indicating strong complementarity across modalities. We further find that takeover events develop more gradually and benefit from longer prediction horizons, whereas handover events depend more on immediate contextual cues, revealing an asymmetry with direct implications for HMI design in assisted driving systems.
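The horizon asymmetry suggests framing handover/takeover prediction as windowed classification at a configurable lead time. The sketch below, under assumed names, feature layout, and sampling rate (it is not BATON's actual loader or benchmark protocol), slices synchronized per-timestep features into observation windows that end a fixed horizon before each candidate timestep, labeled by whether a transition occurs there.

```python
# Hedged sketch of a takeover-prediction sample construction: given
# synchronized per-timestep features and transition timestamps, build
# observation windows that end `horizon_s` seconds before each candidate
# time, labeled by whether a takeover occurs at that time. All names
# (features, event_times, hz) are illustrative, not BATON's API.
import numpy as np

def make_windows(features, event_times, window_s=10.0, horizon_s=2.0, hz=10):
    """features: (T, D) array sampled at `hz`; event_times: takeover times (s)."""
    win, hor = int(window_s * hz), int(horizon_s * hz)
    event_idx = {int(round(t * hz)) for t in event_times}
    samples = []
    for t in range(win + hor, features.shape[0]):
        window = features[t - hor - win : t - hor]   # ends horizon_s before t
        label = int(t in event_idx)                  # takeover at timestep t?
        samples.append((window, label))
    return samples
```

Sweeping `horizon_s` (e.g., over 1–8 s) in such a setup would expose how quickly performance degrades with lead time for each transition type, which is the kind of comparison the reported takeover/handover asymmetry rests on.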
Problem

Research questions and friction points this paper is trying to address.

driving automation
control transition
multimodal context
handover prediction
takeover prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal benchmark
control transition prediction
naturalistic driving
driving automation
human-vehicle interaction