🤖 AI Summary
Conventional HRI safety models rely solely on sensor-derived physical metrics (e.g., distance, velocity), failing to capture individual and contextual influences on subjective safety perception.
Method: We propose a personalized safety model unifying physical safety guarantees with subjective feelings of safety, introducing a learnable parameter ρ that integrates user affective state, trust level, and robot behavioral features. Using hypothesis-driven human-subject experiments in a simulated rescue scenario, we employ multimodal affect analysis, behavioral modeling, and clustering to systematically quantify safety perception mechanisms.
Contribution/Results: This work is the first to embed psychological constructs and individual differences into a formal physical safety framework. Clustering reveals stable psychobehavioral user types, enabling role- and experience-aware adaptive safety regulation. Empirical results show that consistent, controllable robot behavior significantly enhances perceived safety; role identity and repeated exposure exert dynamic modulatory effects; and ρ robustly characterizes and predicts inter-individual variations in safety perception.
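The summary above describes perceived safety as a physical safety measure modulated by a learnable personalization parameter ρ built from affect, trust, and robot-behavior features. The paper's concrete formulation is not reproduced here, so the sketch below is purely illustrative: the feature names, the linear form of ρ, and the multiplicative combination with the physical score are all assumptions made for exposition.

```python
# Illustrative sketch only: the function names, features, and the linear /
# multiplicative forms below are assumptions, not the paper's formulation.
import numpy as np

def physical_safety(distance_m: float, rel_velocity_ms: float,
                    d_min: float = 0.5, v_max: float = 1.5) -> float:
    """Toy sensor-based safety score in [0, 1] from distance and velocity."""
    d_term = min(distance_m / d_min, 1.0)          # saturates beyond d_min
    v_term = max(1.0 - rel_velocity_ms / v_max, 0.0)
    return d_term * v_term

def rho(affect: float, trust: float, behavior_consistency: float,
        w=np.array([0.3, 0.4, 0.3])) -> float:
    """Hypothetical personalization parameter: a weighted combination of
    user-state features (each normalized to [0, 1]); w would be learned."""
    return float(w @ np.array([affect, trust, behavior_consistency]))

def perceived_safety(distance_m, rel_velocity_ms,
                     affect, trust, consistency) -> float:
    # Perceived safety = user-specific rho modulating the physical score.
    return rho(affect, trust, consistency) * \
        physical_safety(distance_m, rel_velocity_ms)
```

Under this toy form, two users facing the identical physical situation receive different perceived-safety scores whenever their affect, trust, or consistency features differ, which is the behavior the summary attributes to ρ.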
📝 Abstract
Ensuring safety in human-robot interaction (HRI) is essential to foster user trust and enable the broader adoption of robotic systems. Traditional safety models primarily rely on sensor-based measures, such as relative distance and velocity, to assess physical safety. However, these models often fail to capture subjective safety perceptions, which are shaped by individual traits and contextual factors. In this paper, we introduce and analyze a parameterized general safety model that bridges the gap between physical and perceived safety by incorporating a personalization parameter, $\rho$, into the safety measurement framework to account for individual differences in safety perception. Through a series of hypothesis-driven human-subject studies in a simulated rescue scenario, we investigate how emotional state, trust, and robot behavior influence perceived safety. Our results show that $\rho$ captures meaningful individual differences, driven by affective responses and trust in task consistency. Specifically, our findings confirm that predictable, consistent robot behavior and the elicitation of positive emotional states significantly enhance perceived safety. Moreover, responses cluster into a small number of user types, supporting adaptive personalization based on shared safety models. Notably, participant role significantly shapes safety perception, and repeated exposure reduces perceived safety for participants in the casualty role, underscoring the impact of physical interaction and experiential change. These findings highlight the importance of adaptive, human-centered safety models that integrate both psychological and behavioral dimensions, offering a pathway toward more trustworthy and effective HRI in safety-critical domains.