🤖 AI Summary
This study investigates whether embodiment (human interlocutors, humanoid robots, or disembodied conversational agents) affects the thematic distribution and semantic structure of self-disclosure. Using sentence-embedding clustering, large language model–assisted topic labeling, and semantic similarity evaluation, we conduct the first systematic comparative analysis of self-disclosure patterns across these three embodiment conditions. Results indicate no statistically significant differences in thematic distribution (p > 0.05) and high semantic consistency across conditions (mean cosine similarity ≥ 0.89), demonstrating that the semantic structure of self-disclosure is stable irrespective of embodiment. These findings challenge the prevailing assumption that embodiment inherently modulates intimate interaction, suggesting instead that deep, interpersonal-style self-disclosure does not require strong physical embodiment. The work provides a cognitive foundation for social AI design, indicating that disembodied or minimally embodied agents can effectively support rich, human-like self-disclosure behaviors.
📝 Abstract
As social robots and other artificial agents become more conversationally capable, it is important to understand whether the content and meaning of self-disclosure towards these agents change depending on the agent's embodiment. In this study, we analysed conversational data from three controlled experiments in which participants self-disclosed to a human, a humanoid social robot, and a disembodied conversational agent. Using sentence embeddings and clustering, we identified themes in participants' disclosures, which were then labelled and explained by a large language model. We then assessed whether these themes and the underlying semantic structure of the disclosures varied by agent embodiment. Our findings reveal strong consistency: thematic distributions did not differ significantly across embodiments, and semantic similarity analyses showed that disclosures were expressed in highly comparable ways. These results suggest that while embodiment may influence human behaviour in human-robot and human-agent interaction, people maintain a consistent thematic focus and semantic structure in their disclosures, whether speaking to humans or to artificial interlocutors.
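The comparison at the core of this pipeline, embedding disclosures, grouping them by condition, and measuring cross-condition semantic similarity with cosine distance, can be sketched as follows. This is an illustrative stand-in, not the authors' code: the toy vectors, condition names, and centroid-based comparison are assumptions (in practice the embeddings would come from a sentence-embedding model such as Sentence-BERT, and clustering would precede the comparison).

```python
# Illustrative sketch of the semantic-similarity comparison described
# in the abstract. All vectors and condition names are mock data.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# Mock sentence embeddings of disclosures, grouped by embodiment
# condition (hypothetical values; real embeddings are high-dimensional).
conditions = {
    "human": [[0.90, 0.10, 0.20], [0.80, 0.20, 0.10]],
    "robot": [[0.85, 0.15, 0.20], [0.80, 0.10, 0.15]],
    "agent": [[0.90, 0.20, 0.10], [0.75, 0.20, 0.20]],
}

# Mean pairwise cosine similarity between condition centroids:
# high values indicate disclosures are expressed similarly regardless
# of whether the interlocutor is a human, robot, or disembodied agent.
cents = {name: centroid(vecs) for name, vecs in conditions.items()}
names = list(cents)
sims = [cosine(cents[a], cents[b])
        for i, a in enumerate(names) for b in names[i + 1:]]
mean_sim = sum(sims) / len(sims)
print(f"mean cross-condition cosine similarity: {mean_sim:.3f}")
```

With real data, a statistical test (e.g. a chi-square test over theme counts) would accompany this similarity score to check the thematic-distribution claim.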