🤖 AI Summary
This study investigates how the agreeableness of large language model–driven voice assistants shapes older adults' perceptions of the assistants' explanations, comparing everyday and emergency home scenarios. In an experiment with 70 older adults, we manipulated assistant agreeableness (high vs. low) and compared real-time environmental explanations with retrospective, history-based ones. Results indicate that highly agreeable assistants are trusted and preferred more in routine contexts, whereas in emergencies clarity outweighs warmth. Real-time explanations consistently outperformed retrospective ones, and participants high in trait agreeableness rated low-agreeableness assistants significantly more negatively. These findings underscore the need for explainability strategies tailored to user personality, situational context, and individual characteristics, and show that social tone and perceived competence operate as distinct dimensions, challenging the efficacy of one-size-fits-all explanation approaches.
📝 Abstract
LLM-based voice assistants (VAs) increasingly support older adults aging in place, yet how an assistant's agreeableness shapes the perception of its explanations remains underexplored. We conducted a study (N=70) examining how VA agreeableness influences older adults' perceptions of explanations across routine and emergency home scenarios. High-agreeableness assistants were perceived as more trustworthy, empathetic, and likable, but these benefits diminished in emergencies, where clarity outweighed warmth. Agreeableness did not affect perceived intelligence, suggesting that social tone and competence are separable dimensions. Real-time environmental explanations outperformed history-based ones, and agreeable older adults penalized low-agreeableness assistants more strongly. These findings underscore the need to move beyond a one-size-fits-all approach to AI explainability toward strategies that balance personality, context, and audience.