🤖 AI Summary
The relationship between meaningful human control (MHC) and users' perceived safety and trust in partially automated driving remains unclear, particularly regarding the MHC condition of "tracking" (i.e., the system's responsiveness to the driver's reasons), which has lacked empirical validation.
Method: Drawing on previously collected in-depth interviews with Tesla "Full Self-Driving" (FSD) Beta users, the study applies semantic coding and thematic analysis within the theoretical framework of meaningful human control.
Contribution/Results: The study offers the first empirical evidence of a context-dependent relationship between tracking and subjective trust and safety: tracking failures in dangerous situations consistently led to low trust and a perceived lack of safety, whereas tracking failures during routine maneuvers (such as lane changing and braking) had limited impact on perceived safety. These findings position tracking as a key mechanism through which meaningful human control shapes subjective experience, providing empirical grounding for responsibility attribution and human–machine cooperation in the design of partially automated driving systems.
📝 Abstract
The use of partially automated driving systems raises concerns about potential responsibility issues, posing risks to the safety, acceptance, and adoption of these technologies. The concept of meaningful human control has emerged in response to the responsibility gap problem, requiring the fulfillment of two conditions: tracking and tracing. While this concept has provided important philosophical and design insights into automated driving systems, little is currently known about how meaningful human control relates to the subjective experiences of actual users of these systems. To address this gap, our study investigated the alignment between the degree of meaningful human control and drivers' perceptions of safety and trust in a real-world partially automated driving system. We utilized previously collected data from interviews with Tesla "Full Self-Driving" (FSD) Beta users, examining how users' perceptions aligned with how well the system tracked their reasons. We found that tracking of users' reasons for driving tasks (such as safe maneuvers) correlated with perceived safety and trust, albeit with notable exceptions. Surprisingly, failure to track lane-changing and braking reasons was not necessarily associated with negative perceptions of safety. However, failure of the system to track expected maneuvers in dangerous situations always resulted in low trust and a perceived lack of safety. Overall, our analyses highlight points of alignment, but also possible discrepancies, between perceived safety and trust on the one hand and meaningful human control on the other. Our results can help developers of automated driving technology design systems that are under meaningful human control and are perceived as safe and trustworthy.