🤖 AI Summary
This study investigates how anthropomorphic features of chatbots, specifically human-like identity, emotional expression, and non-verbal expression, influence users' prosocial behavior and helping intentions toward chatbots by eliciting empathic responses. Method: a controlled online experiment (N = 244) manipulated how a chatbot explained its mistakes during a collaborative image-labeling task; behavioral measures (e.g., actual help provided) and qualitative self-reports were combined to test mediation. Results: chatbots exhibiting human-like identity and emotional expression significantly increased users' empathy, which in turn enhanced both observed helping behavior and self-reported helping intentions; empathy served as the key mediating mechanism. Drawing on the Computers Are Social Actors (CASA) framework, the work uncovers the psychological pathway through which anthropomorphism promotes human prosociality toward machines and offers theoretical grounding and practical design implications for human–chatbot collaboration.
📝 Abstract
Chatbots are increasingly integrated into people's lives and are widely used to help people. Recently, there has also been growing interest in the reverse direction, humans helping chatbots, owing to a wide range of benefits including better chatbot performance, human well-being, and collaborative outcomes. However, little research has explored the factors that motivate people to help chatbots. To address this gap, we draw on the Computers Are Social Actors (CASA) framework to examine how chatbot anthropomorphism (human-like identity, emotional expression, and non-verbal expression) influences human empathy toward chatbots and people's subsequent prosocial behaviors and intentions. We also explore people's own interpretations of their prosocial behaviors toward chatbots. We conducted an online experiment (N = 244) in which chatbots made mistakes in a collaborative image-labeling task and explained the reasons to participants. We then measured participants' prosocial behaviors and intentions toward the chatbots. Our findings reveal that the human identity and emotional expression of chatbots increased participants' prosocial behavior and intentions toward chatbots, with empathy mediating these effects. Qualitative analysis further identified two motivations for participants' prosocial behaviors: empathy for the chatbot and perceiving the chatbot as human-like. We discuss the implications of these results for understanding and promoting human prosocial behaviors toward chatbots.