Do We Talk to Robots Like Therapists, and Do They Respond Accordingly? Language Alignment in AI Emotional Support

📅 2025-06-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether emotional support dialogues delivered by social robots align with human psychotherapy conversations in terms of thematic content and semantic response patterns. Addressing two core questions—(1) whether users disclose the same topics to robots as they do to therapists, and (2) whether robot responses are semantically proximal to therapist responses—the authors propose a cross-agent thematic alignment evaluation framework. Using a GPT-3.5-driven QTrobot to collect dialogues, they apply Sentence-BERT embeddings, K-means clustering, and Euclidean-distance-based cluster mapping to jointly analyze user disclosure themes and system response semantics. Results show that 90.88% of user disclosures in robot conversations map onto topic clusters derived from human therapy data, with significant semantic overlap at the response level. This work provides a systematic, quantitative validation that emotion-support robot dialogues exhibit clinically relevant structural consistency, establishing empirically grounded, language-based alignment evidence for trustworthy human-robot empathic interaction.

📝 Abstract
As conversational agents increasingly engage in emotionally supportive dialogue, it is important to understand how closely their interactions resemble those in traditional therapy settings. This study investigates whether the concerns shared with a robot align with those shared in human-to-human (H2H) therapy sessions, and whether robot responses semantically mirror those of human therapists. We analyzed two datasets: one of interactions between users and professional therapists (Hugging Face's NLP Mental Health Conversations), and another involving supportive conversations with a social robot (QTrobot from LuxAI) powered by a large language model (LLM, GPT-3.5). Using sentence embeddings and K-means clustering, we assessed cross-agent thematic alignment by applying a distance-based cluster-fitting method that evaluates whether responses from one agent type map to clusters derived from the other, and validated it using Euclidean distances. Results showed that 90.88% of robot conversation disclosures could be mapped to clusters from the human therapy dataset, suggesting shared topical structure. For matched clusters, we compared the subjects as well as therapist and robot responses using Transformer, Word2Vec, and BERT embeddings, revealing strong semantic overlap in subjects' disclosures in both datasets, as well as in the responses given to similar human disclosure themes across agent types (robot vs. human therapist). These findings highlight both the parallels and boundaries of robot-led support conversations and their potential for augmenting mental health interventions.
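The distance-based cluster-fitting step described in the abstract can be sketched as follows. This is a minimal illustration using random stand-in vectors: the real pipeline embeds actual dialogue turns with Sentence-BERT, and the cluster count (8) and distance threshold (95th percentile of in-dataset distances) are arbitrary assumptions here, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in the paper these would be Sentence-BERT
# vectors of user disclosures; here they are random and illustrative.
therapy_emb = rng.normal(size=(200, 16))  # H2H therapy disclosures
robot_emb = rng.normal(size=(50, 16))     # robot-session disclosures

def kmeans(X, k, iters=50, seed=0):
    """Minimal K-means returning cluster centroids."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean).
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# 1. Derive topic clusters from the human-therapy dataset only.
centroids = kmeans(therapy_emb, k=8)

# 2. Map each robot disclosure onto its nearest therapy cluster
#    by Euclidean distance (the cross-agent fitting step).
nearest = np.linalg.norm(
    robot_emb[:, None] - centroids[None], axis=2).min(axis=1)

# 3. Count a robot disclosure as "mapped" if its distance falls
#    within the range typical of the therapy data itself.
therapy_dists = np.linalg.norm(
    therapy_emb[:, None] - centroids[None], axis=2).min(axis=1)
threshold = np.quantile(therapy_dists, 0.95)
mapped_fraction = float((nearest <= threshold).mean())
print(f"mapped fraction: {mapped_fraction:.2%}")
```

With real embeddings, a high mapped fraction (the paper reports 90.88%) indicates that disclosures made to the robot occupy the same topical regions as disclosures made to human therapists.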
Problem

Research questions and friction points this paper is trying to address.

Assessing alignment of user concerns in robot vs human therapy sessions
Comparing semantic similarity of robot and therapist responses to disclosures
Evaluating AI's potential for mental health support through language analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used sentence embeddings for thematic alignment
Applied K-means clustering for response mapping
Compared semantic overlap using Transformer, Word2Vec, and BERT embeddings