🤖 AI Summary
To address the challenge of rendering robotic attention states intuitively perceivable to humans in human–robot collaboration, this paper proposes a mirror-visual-feedback-based robotic eye interface. The method employs a screen-rendered 3D eyeball model that dynamically mirrors and overlays the physical-space gaze target onto the robot’s “eye region,” explicitly revealing its visual focus without requiring auxiliary explanation. This work represents the first integration of mirror visual feedback into a robotic eye system, unifying spatial gaze estimation, real-time rendering, and coordinated head motion control on a mobile platform. A user study demonstrates that enabling this design significantly enhances participants’ sensitivity to the robot’s information-processing state: average misidentification latency decreases by 23%, and both interaction interpretability and subjective experience scores improve significantly (p < 0.01).
📝 Abstract
The gaze of a person tends to reflect their interest. This work explores what happens when this statement is taken literally and applied to robots. Here we present a robot system that employs a moving robot head with a screen-based eye model that can direct the robot's gaze to points in physical space and present a reflection-like mirror image of the attended region on top of each eye. We conducted a user study with 33 participants, who were asked to instruct the robot to perform pick-and-place tasks, monitor the robot's task execution, and interrupt it in case of erroneous actions. Despite a deliberate lack of instructions about the role of the eyes and a very brief system exposure, participants felt more aware about the robot's information processing, detected erroneous actions earlier, and rated the user experience higher when eye-based mirroring was enabled compared to non-reflective eyes. These results suggest a beneficial and intuitive utilization of the introduced method in cooperative human-robot interaction.