🤖 AI Summary
Existing autonomous driving perception evaluation metrics overlook dynamic risk disparities—such as object velocity, distance, orientation, size, and collision severity—rendering them inadequate for functional safety verification. To address this, we propose a safety-oriented environmental perception assessment framework. Our method introduces, for the first time, an interpretable single-value safety score that unifies multidimensional dynamic risk factors into a coherent model. We further design a weighted risk modeling approach coupled with multi-source data fusion, and validate the framework across real-world road scenarios and high-fidelity simulation environments. Evaluated on multiple benchmark datasets, our metric significantly outperforms conventional accuracy-based metrics (e.g., mAP), achieving strong alignment between safety scores and actual risk levels (Spearman’s ρ > 0.92). This provides a quantifiable, verifiable evaluation benchmark for functional safety certification of SAE Level 3+ automated driving systems.
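The summary does not give the paper's actual formula, but the idea of a risk-weighted single-value score can be sketched as follows. All normalizations, weight values, and the `DetectedObject` attribute names below are illustrative assumptions, not the authors' model: each object gets a risk weight from its velocity, distance, orientation, size, and collision severity, and the safety score is the risk-weighted fraction of objects that were detected.

```python
import math
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Hypothetical per-object attributes named in the summary."""
    velocity: float     # m/s, closing speed toward the ego vehicle
    distance: float     # m, range from the ego vehicle
    orientation: float  # rad, heading relative to ego path (0 = head-on)
    size: float         # m^2, projected frontal area
    severity: float     # [0, 1], estimated collision severity

def risk_weight(obj: DetectedObject,
                w_v=0.3, w_d=0.3, w_o=0.15, w_s=0.1, w_c=0.15) -> float:
    """Combine the risk factors into one per-object weight in [0, 1].
    Normalization constants and weights are illustrative choices."""
    v_term = min(obj.velocity / 30.0, 1.0)        # saturate at 30 m/s
    d_term = math.exp(-obj.distance / 50.0)       # nearer -> riskier
    o_term = max(math.cos(obj.orientation), 0.0)  # head-on -> riskier
    s_term = min(obj.size / 10.0, 1.0)
    return (w_v * v_term + w_d * d_term + w_o * o_term
            + w_s * s_term + w_c * obj.severity)

def safety_score(detected: list[bool],
                 objects: list[DetectedObject]) -> float:
    """Risk-weighted detection rate: one interpretable value in [0, 1].
    Missing a high-risk object hurts more than missing a low-risk one."""
    weights = [risk_weight(o) for o in objects]
    total = sum(weights)
    if total == 0.0:
        return 1.0
    return sum(w for w, hit in zip(weights, detected) if hit) / total
```

Unlike plain mAP, which counts every missed object equally, this kind of score penalizes a missed fast, nearby, head-on object far more than a missed distant, slow one, which is the disparity the summary says conventional metrics overlook.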
📝 Abstract
Complete perception of the environment and its correct interpretation are crucial for autonomous vehicles. Object perception is the main component of automotive surround sensing. Various metrics already exist for the evaluation of object perception. However, objects can be of different importance depending on their velocity, orientation, distance, size, or the potential damage that could be caused by a collision due to a missed detection. Thus, these additional parameters have to be considered for safety evaluation. We propose a new safety metric that incorporates all these parameters and returns a single, easily interpretable safety assessment score for object perception. This new metric is evaluated on both real-world and virtual datasets and compared to state-of-the-art metrics.
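The summary reports the metric's alignment with actual risk levels as Spearman's ρ > 0.92. How such an alignment check works can be sketched with a self-contained rank-correlation routine; the data and the evaluation pipeline here are assumptions for illustration, only the statistic itself is standard.

```python
import math

def rankdata(xs: list[float]) -> list[float]:
    """1-based ranks with averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1  # average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(a: list[float], b: list[float]) -> float:
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = math.sqrt(sum((x - ma) ** 2 for x in ra))
    sb = math.sqrt(sum((y - mb) ** 2 for y in rb))
    return cov / (sa * sb)
```

In an evaluation like the one described, `a` would hold the metric's safety scores per scenario and `b` the ground-truth risk levels; a ρ near 1 means the metric ranks scenarios in the same order as the true risk.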