🤖 AI Summary
This study investigates key determinants of user privacy perception in robot-assisted health counseling, focusing on information transparency, user control over personal data, and robot behavioral proactivity. Method: A controlled online experiment (N=200) systematically manipulated these three factors in factorial combinations to quantitatively assess users' judgments of privacy appropriateness and trust. Contribution/Results: Neither increased transparency nor heightened robot proactivity alone significantly improved privacy trust. Only explicit, actionable user control, specifically over the interpretation and sharing of personal health information, significantly enhanced both privacy appropriateness ratings (p<0.01) and trust. This is the first empirical demonstration of user control as a critical determinant of privacy perception in human-robot health interactions. The findings provide foundational evidence and actionable design principles for developing trustworthy, privacy-sensitive social robots in healthcare contexts.
📝 Abstract
Social robots are increasingly recognized as valuable supporters in the field of well-being coaching. They can function as independent coaches or provide support alongside human coaches and healthcare professionals. In coaching interactions, these robots often handle sensitive information shared by users, making privacy a relevant issue. Despite this, little is known about the factors that shape users' privacy perceptions. This research systematically examines three key factors: (1) transparency about information usage, (2) the level of specific user control over how the robot uses their information, and (3) the robot's behavioral approach, i.e., whether it acts proactively or only responds on demand. Our results from an online study (N = 200) show that even when users grant the robot general access to personal data, they additionally expect the ability to explicitly control how that information is interpreted and shared during sessions. Experimental conditions that provided such control received significantly higher ratings for perceived privacy appropriateness and trust. Compared to user control, the effects of transparency and proactivity on perceived privacy appropriateness were small and not statistically significant. The results suggest that merely informing users or sharing information proactively is insufficient without accompanying user control. These insights underscore the need for further research on mechanisms that allow users to manage robots' information processing and sharing, especially as social robots take on more proactive roles alongside humans.