Towards Human-Centric Evaluation of Interaction-Aware Automated Vehicle Controllers: A Framework and Case Study

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current autonomous driving evaluation predominantly focuses on technical metrics (e.g., collision avoidance, lane-keeping), neglecting human drivers' interactive experience. To address this gap, the authors propose a human-centred evaluation framework spanning four domains (interaction effect, interaction perception, interaction effort, and interaction ability), grounded in the human-robot interaction literature and demonstrated in a high-fidelity driving-simulator case study. They assess a state-of-the-art AV controller in a highway on-ramp merging scenario using one representative metric per domain: perceived safety, subjective ratings of the other vehicle's driving behaviour, driver workload, and merging success, with HDV-HDV interactions as a baseline. The results show that the framework reveals critical differences in driver experience (trust, perceived predictability, and cognitive load) that conventional technical metrics miss. This work offers a systematic approach for developing AV behaviours that are not only functionally capable but also understandable, acceptable, and safe from a human perspective.

📝 Abstract
As automated vehicles (AVs) increasingly integrate into mixed-traffic environments, evaluating their interaction with human-driven vehicles (HDVs) becomes critical. In most research focused on developing new AV control algorithms (controllers), the performance of these algorithms is assessed solely based on performance metrics such as collision avoidance or lane-keeping efficiency, while largely overlooking the human-centred dimensions of interaction with HDVs. This paper proposes a structured evaluation framework that addresses this gap by incorporating metrics grounded in the human-robot interaction literature. The framework spans four key domains: a) interaction effect, b) interaction perception, c) interaction effort, and d) interaction ability. These domains capture both the performance of the AV and its impact on human drivers around it. To demonstrate the utility of the framework, we apply it to a case study evaluating how a state-of-the-art AV controller interacts with human drivers in a merging scenario in a driving simulator. Using HDV-HDV interactions as a baseline, this study included one representative metric per domain: a) perceived safety, b) subjective ratings, specifically how participants perceived the other vehicle's driving behaviour (e.g., aggressiveness or predictability), c) driver workload, and d) merging success. The results showed that incorporating metrics covering all four domains in the evaluation of AV controllers can illuminate critical differences in driver experience when interacting with AVs. This highlights the need for a more comprehensive evaluation approach. Our framework offers researchers, developers, and policymakers a systematic method for assessing AV behaviour beyond technical performance, fostering the development of AVs that are not only functionally capable but also understandable, acceptable, and safe from a human perspective.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AV-HDV interaction lacks human-centred metrics
How to assess AV controllers using human-robot interaction criteria
How to measure driver experience in AV interactions beyond technical performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centric AV evaluation framework
Incorporates HRI-based interaction metrics
Four-domain structured assessment approach
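The four-domain structure above pairs each domain with one representative metric in the case study (perceived safety, subjective ratings, driver workload, merging success). A minimal sketch of how such per-trial measures could be aggregated per domain is shown below; the class and function names, value ranges, and the simple averaging scheme are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TrialRecord:
    """One hypothetical simulator trial of a human driver interacting with an AV controller."""
    perceived_safety: float    # interaction effect: subjective safety rating, scaled 0..1
    subjective_rating: float   # interaction perception: e.g. perceived predictability, 0..1
    driver_workload: float     # interaction effort: e.g. a normalized workload score, 0..1
    merging_success: bool      # interaction ability: did the merge complete successfully?

def evaluate(trials: list[TrialRecord]) -> dict[str, float]:
    """Aggregate one representative metric per evaluation domain across trials."""
    return {
        "interaction_effect": mean(t.perceived_safety for t in trials),
        "interaction_perception": mean(t.subjective_rating for t in trials),
        "interaction_effort": mean(t.driver_workload for t in trials),
        "interaction_ability": mean(float(t.merging_success) for t in trials),
    }

# Example: compare an AV controller against the HDV-HDV baseline condition.
av_trials = [TrialRecord(0.8, 0.7, 0.4, True), TrialRecord(0.6, 0.5, 0.6, False)]
scores = evaluate(av_trials)
```

Reporting one score per domain, rather than collapsing everything into a single number, preserves the framework's point: a controller can score well on interaction ability (merging success) while still imposing high interaction effort on surrounding drivers.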