🤖 AI Summary
Current autonomous driving validation struggles to simultaneously achieve high test fidelity, low cost, and scalability; moreover, miniature hardware-in-the-loop (HIL) platforms lack a systematic framework for quantitative SOTIF-compliance assessment. This paper proposes a miniature mixed-reality HIL platform designed for auditable closed-loop evaluation, integrating high-precision motion capture, mixed-reality rendering, and synchronized timing control to establish a unified spatiotemporal measurement core and a three-stage SOTIF testing pipeline. The platform achieves a spatial root-mean-square error of 10.27 mm and maintains a stable closed-loop latency of approximately 45 ms. It is the first miniature HIL system to enable trigger-condition identification and quantitative characterization of performance boundaries. An empirical evaluation using Autoware demonstrates its capability to precisely localize a critical performance cliff induced by a 40 ms injected latency, substantially enhancing the scientific rigor and assessment value of compact-scale validation platforms.
📝 Abstract
Validation of autonomous driving systems requires a trade-off among test fidelity, cost, and scalability. While miniaturized hardware-in-the-loop (HIL) platforms have emerged as a promising solution, they generally lack a systematic framework supporting rigorous quantitative analysis, limiting their value as scientific evaluation tools. To address this challenge, we propose MMRHP, a miniature mixed-reality HIL platform that elevates miniaturized testing from functional demonstration to rigorous, reproducible quantitative analysis. The core contributions are threefold. First, we propose a systematic three-phase testing process oriented toward the Safety of the Intended Functionality (SOTIF) standard, providing actionable guidance for identifying the performance limits and triggering conditions of otherwise correctly functioning systems. Second, we design and implement an HIL platform centered around a unified spatiotemporal measurement core to support this process, ensuring consistent and traceable quantification of physical motion and system timing. Finally, we demonstrate the effectiveness of this solution through comprehensive experiments. The platform itself was first validated, achieving a spatial accuracy of 10.27 mm RMSE and a stable closed-loop latency baseline of approximately 45 ms. An in-depth case study then leveraged the validated platform to quantify Autoware's performance baseline and identify a critical performance cliff at an injected latency of 40 ms. This work shows that a structured process, combined with a platform offering a unified spatiotemporal benchmark, enables reproducible, interpretable, and quantitative closed-loop evaluation of autonomous driving systems.