🤖 AI Summary
This study investigates how robot embodiment (physical robot vs. voice-only chatbot) and empathic prosody generated by a large language model (LLM) jointly influence third-party empathy induction and actual helping behavior (volunteer hours). Method: A controlled experiment (N=60) was conducted using a service robot platform integrated with an LLM dialogue system, employing standardized empathy interaction protocols and dual measurement: behavioral (actual volunteer duration) and subjective (validated self-report scales), while orthogonally manipulating embodiment and empathic prosody. Contribution/Results: Neither embodiment nor empathic prosody significantly enhanced helping intentions or behaviors. Although the LLM generated semantically plausible empathic utterances, it failed to elicit genuine empathic responses in users or translate them into behavior. These findings provide empirical evidence on the limits of embodied AI for empathy interventions and reveal a dissociation between expressing empathy semantically and eliciting it behaviorally.
📝 Abstract
This study investigates the elicitation of empathy toward a third party through interaction with social agents. Participants engaged with either a physical robot or a voice-enabled chatbot, both driven by a large language model (LLM) programmed to exhibit either an empathetic or a neutral tone. The interaction focused on a fictional character, Katie Banks, who is in a challenging situation and in need of financial donations. For 60 participants, willingness to help Katie, measured as the number of hours they were willing to volunteer, was assessed along with their perceptions of the agent. Results indicate that neither robotic embodiment nor empathetic tone significantly influenced participants' willingness to volunteer. While the LLM effectively simulated human empathy, fostering genuine empathetic responses in participants proved challenging.