🤖 AI Summary
This study investigates how a robot's transparency and sociability influence users' attribution judgments and trust decisions when health-related information provided by a domestic health robot conflicts with the user's memory. A video-based experiment (2 × 2 between-subjects design, 176 participants) was conducted using the Furhat platform, simulating a medication-timing conflict in a multi-user household. Attributions were assessed via qualitative coding, while trust was measured quantitatively. Results show that robot transparency significantly increased users' attribution of the discrepancy to external interference (e.g., a household member modifying the robot's information); even so, 72% of participants still prioritized the robot's recommendation over the user's memory, revealing a critical trust imbalance in safety-critical healthcare contexts: transparency alone does not prevent overtrust. The study's core contribution is a proposed "access control + explainability" co-design approach, empirically grounded and theoretically informed, for multi-user robots in high-stakes domains such as home healthcare.
📝 Abstract
Advances in robotic capabilities for physical assistance, psychological support, and daily health management are making the deployment of intelligent healthcare robots in home environments increasingly feasible. However, challenges arise when the information provided by these robots contradicts users' memory, raising concerns about user trust and decision-making. This paper presents a study that examines how varying a robot's level of transparency and sociability influences users' interpretation, decision-making, and perceived trust when they face conflicting information from a robot. In a 2 × 2 between-subjects online study, 176 participants watched videos of a Furhat robot acting as a family healthcare assistant and suggesting that a fictional user take medication at a time different from the one the user remembered. Results indicate that robot transparency influenced users' interpretation of the information discrepancy: with the low-transparency robot, the most frequent assumption was that the user had misremembered the time, whereas with the high-transparency robot, participants were more likely to attribute the discrepancy to external factors, such as a partner or another household member modifying the robot's information. Additionally, participants exhibited a tendency toward overtrust, often prioritizing the robot's recommendations over the user's memory even when suspecting system malfunctions or third-party interference. These findings highlight the impact of transparency mechanisms in robotic systems, the complexity and importance of access control for multi-user robots deployed in home environments, and the potential risks of users' overreliance on robots in sensitive domains such as healthcare.
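To make the "access control + explainability" pairing concrete, here is a minimal, hypothetical Python sketch; it is not from the paper, and names such as `AUTHORIZED_EDITORS` and `MedicationEntry` are illustrative assumptions. The idea it shows: every edit to a medication schedule is both permission-checked and audit-logged, so the robot can explain who changed an entry and when, rather than leaving the user to guess between misremembering, malfunction, and third-party interference.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical household roles; a real system would manage these per deployment.
AUTHORIZED_EDITORS = {"primary_user", "partner", "caregiver"}

@dataclass
class ScheduleChange:
    """One audited edit to a medication time."""
    editor: str
    old_time: str
    new_time: str
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class MedicationEntry:
    name: str
    time: str  # e.g., "08:00"
    history: list[ScheduleChange] = field(default_factory=list)

    def update_time(self, editor: str, new_time: str) -> None:
        """Access control: only authorized household members may edit."""
        if editor not in AUTHORIZED_EDITORS:
            raise PermissionError(f"{editor} may not modify the schedule")
        self.history.append(ScheduleChange(editor, self.time, new_time))
        self.time = new_time

    def explain(self) -> str:
        """Explainability: surface who last changed the entry and when,
        so a discrepancy can be attributed correctly."""
        if not self.history:
            return f"{self.name} is scheduled at {self.time}; never modified."
        last = self.history[-1]
        return (f"{self.name} is scheduled at {self.time}; "
                f"changed from {last.old_time} by {last.editor} "
                f"on {last.timestamp:%Y-%m-%d %H:%M}.")

# Usage: a partner reschedules, and the robot can explain the change.
entry = MedicationEntry("blood pressure medication", "08:00")
entry.update_time("partner", "20:00")
print(entry.explain())
```

Under this sketch's assumptions, the transparency mechanism is grounded in the access-control log itself, so the explanation the robot gives ("changed by partner at 20:00") directly supports the external-interference attribution the study found high transparency to encourage.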