🤖 AI Summary
To address the challenges of motion risk prediction in autonomous driving's long-tail scenarios, and the weak model generalization caused by the scarcity of real high-risk data, this paper proposes the DriveMRP-Agent framework and the DriveMRP-10K synthetic dataset, introducing the first bird's-eye-view (BEV) multi-agent joint risk modeling approach. The method leverages trajectory projection, global context injection, and fine-tuned vision-language models (VLMs) to build a VLM-agnostic risk assessment architecture, enabling unified reasoning over ego-vehicle, traffic-agent, and environmental risks. On synthetic data, accident identification accuracy reaches 88.03% (+60.9 percentage points); zero-shot accuracy on a real-world high-risk benchmark reaches 68.50% (+39.1 percentage points), markedly improving cross-domain generalization. Key contributions: (1) a novel paradigm for synthesizing high-risk motion data; (2) a BEV-based multi-agent joint risk modeling mechanism; and (3) a decoupled VLM architecture for risk reasoning.
📄 Abstract
Autonomous driving has seen significant progress, driven by extensive real-world data. However, in long-tail scenarios, accurately predicting the safety of the ego vehicle's future motion remains a major challenge due to uncertainties in dynamic environments and limitations in data coverage. In this work, we explore whether the motion risk prediction capabilities of Vision-Language Models (VLMs) can be enhanced by synthesizing high-risk motion data. Specifically, we introduce a Bird's-Eye View (BEV) based motion simulation method that models risk from three aspects: the ego vehicle, other vehicles, and the environment. This allows us to synthesize plug-and-play, high-risk motion data suitable for VLM training, which we call DriveMRP-10K. Furthermore, we design a VLM-agnostic motion risk estimation framework, named DriveMRP-Agent. This framework incorporates a novel information injection strategy for global context, the ego-vehicle perspective, and trajectory projection, enabling VLMs to effectively reason about the spatial relationships between motion waypoints and the environment. Extensive experiments demonstrate that, after fine-tuning on DriveMRP-10K, our DriveMRP-Agent framework significantly improves the motion risk prediction performance of multiple VLM baselines, raising accident recognition accuracy from 27.13% to 88.03%. Moreover, under zero-shot evaluation on an in-house real-world high-risk motion dataset, DriveMRP-Agent achieves a substantial performance leap, boosting accuracy from the base model's 29.42% to 68.50%, which showcases the strong generalization of our method in real-world scenarios.
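To make the trajectory-projection idea concrete, the sketch below shows one plausible way to inject future motion into a BEV input for a VLM: project ego-frame waypoints into BEV pixel coordinates, rasterize them as an overlay, and pair the image with a text query. All function names, the coordinate convention (x forward, y left, ego at the bottom-center), the resolution, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_waypoints_to_bev(waypoints_m, bev_size=256, meters_per_px=0.25):
    """Map ego-frame waypoints (x forward, y left, in meters) to BEV pixel
    coordinates, with the ego vehicle at the bottom-center of the image.
    Hypothetical projection; the paper's exact convention is not specified here."""
    ego_u, ego_v = bev_size // 2, bev_size - 1  # bottom-center pixel
    pixels = []
    for x, y in waypoints_m:
        u = int(round(ego_u - y / meters_per_px))  # left in ego frame -> left in image
        v = int(round(ego_v - x / meters_per_px))  # forward -> up in image
        if 0 <= u < bev_size and 0 <= v < bev_size:
            pixels.append((u, v))
    return pixels

def rasterize_trajectory(pixels, bev_size=256):
    """Burn projected waypoints into a single-channel overlay mask."""
    mask = np.zeros((bev_size, bev_size), dtype=np.uint8)
    for u, v in pixels:
        mask[v, u] = 255
    return mask

def build_risk_prompt(num_waypoints):
    """Hypothetical text query pairing the overlay with the three risk aspects."""
    return (
        f"The BEV image overlays {num_waypoints} future waypoints of the ego vehicle. "
        "Assess motion risk from three aspects: the ego vehicle, other vehicles, "
        "and the environment. Answer with a risk level and a brief justification."
    )

# Example: a straight-ahead trajectory, one waypoint per meter for 20 m.
trajectory = [(i + 1.0, 0.0) for i in range(20)]
pixels = project_waypoints_to_bev(trajectory)
overlay = rasterize_trajectory(pixels)
prompt = build_risk_prompt(len(pixels))
```

In a full pipeline, the overlay would be composited onto the rendered BEV scene (other agents, map) before being passed to the VLM together with the prompt; the "plug-and-play" property comes from the fact that only the trajectory overlay changes between risk queries.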