🤖 AI Summary
Current social robots predominantly employ reactive capability disclosure, clarifying limitations only upon user inquiry, which leads to frequent misunderstandings and reduced conversational naturalness. Method: This study conducts the first systematic comparison of proactive (preemptive, voice-based capability boundary disclosure) and reactive disclosure strategies, alongside a no-disclosure baseline, grounded in human–robot collaboration paradigms and principled spoken-dialogue design. It draws on an in-person user study (N=120) in which each participant had three contextualized, speech-based interactions with a social robot. Contribution/Results: Proactive disclosure significantly improves conversational naturalness (p<0.01), interaction satisfaction (+32%), and users' responsiveness in adjusting their behavior following misunderstandings; 87% of users preferred the proactive approach. These findings provide empirical grounding and methodological guidance for designing explainable, trustworthy social robot interactions.
📝 Abstract
When encountering a robot in the wild, it is not inherently clear to human users what the robot's capabilities are. When misunderstandings or problems arise in spoken interaction, robots often simply apologize and move on, without additional effort to make sure the user understands what happened. We set out to compare the effect of two speech-based capability communication strategies (proactive, reactive) against a robot without such a strategy, with regard to users' ratings of the interaction and their behavior during it. To this end, we conducted an in-person user study with 120 participants, each of whom had three speech-based interactions with a social robot in a restaurant setting. Our results suggest that users preferred the robot that communicated its capabilities proactively and adjusted their behavior in those interactions, adopting a more conversational interaction style while also enjoying the interaction more.