AI Summary
Current human–robot interaction (HRI) research lacks a coherent, theoretically grounded understanding of trust, as established sociological trust theories remain underintegrated into robotics scholarship. Method: Through interdisciplinary dialogue between sociology and social robotics, this paper synthesizes classical trust theory with empirical HRI practice to develop the first comprehensive analytical framework for trust in human–robot relationships. Employing sociological theory critique, controlled interaction experiments, and qualitative field studies, it proposes a three-dimensional "formation–validation–articulation" model of trust dynamics. Contribution/Results: The work moves beyond disciplinary silos by reconceptualizing human–robot trust as inherently dynamic, context-dependent, and operationally measurable. It establishes foundational theoretical principles and empirically informed design guidelines, enabling robust trust assessment and targeted intervention strategies in HRI systems, thereby bridging theoretical rigor with practical applicability.
Abstract
As robots find their way into ever more aspects of everyday life, questions around trust are becoming increasingly important. What does it mean to trust a robot? And how should we think about trust in relationships that involve both humans and non-human agents? While the field of Human-Robot Interaction (HRI) has made trust a central topic, the concept is often approached in fragmented ways. At the same time, established work in sociology, where trust has long been a key theme, is rarely brought into conversation with developments in robotics. This article argues for a more interdisciplinary approach. By drawing on insights from both the social sciences and social robotics, we explore how trust is shaped, tested, and made visible. Our goal is to open up a dialogue between disciplines and help build a more grounded and adaptable framework for understanding trust in the evolving world of human-robot interaction.