What Can Robots Teach Us About Trust and Reliance? An interdisciplinary dialogue between Social Sciences and Social Robotics

📅 2025-07-17
📈 Citations: 0
✹ Influential: 0
🤖 AI Summary
Problem: Current human–robot interaction (HRI) research lacks a coherent, theoretically grounded understanding of trust, as established sociological trust theories remain underintegrated into robotics scholarship. Method: Through interdisciplinary dialogue between sociology and social robotics, the paper brings classical trust theory into conversation with empirical HRI practice to develop an analytical framework for trust in human–robot relationships. Drawing on sociological theory critique and empirical HRI studies, it characterizes trust dynamics along three dimensions: how trust is formed, tested, and made visible. Contribution/Results: The work reconceptualizes human–robot trust as dynamic, context-dependent, and operationally observable, and it offers theoretical principles and empirically informed design guidance for assessing trust and intervening in HRI systems, bridging theoretical rigor with practical applicability.

📝 Abstract
As robots find their way into more and more aspects of everyday life, questions around trust are becoming increasingly important. What does it mean to trust a robot? And how should we think about trust in relationships that involve both humans and non-human agents? While the field of Human-Robot Interaction (HRI) has made trust a central topic, the concept is often approached in fragmented ways. At the same time, established work in sociology, where trust has long been a key theme, is rarely brought into conversation with developments in robotics. This article argues that we need a more interdisciplinary approach. By drawing on insights from both social sciences and social robotics, we explore how trust is shaped, tested and made visible. Our goal is to open up a dialogue between disciplines and help build a more grounded and adaptable framework for understanding trust in the evolving world of human-robot interaction.
Problem

Research questions and friction points this paper is trying to address.

Understanding trust in human-robot relationships
Bridging interdisciplinary gaps in trust research
Developing adaptable frameworks for HRI trust dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interdisciplinary approach combining social sciences and robotics
Exploring trust formation in human-robot interactions
Building adaptable framework for trust understanding