When Large Language Models are Reliable for Judging Empathic Communication

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how reliably large language models (LLMs) judge the quality of empathic communication. Using 200 authentic supportive dialogues, it systematically compares LLM judgments, under zero-shot and few-shot settings, with those of domain experts and crowdsourced annotators, grounded in four established empathy-assessment frameworks from psychology, NLP, and communication studies. Inter-expert agreement serves as the gold standard, with Cohen’s κ and Krippendorff’s α used to quantify cross-group reliability. Results show that LLMs reach reliability comparable to expert consensus across all four frameworks, clearly surpassing crowdsourced annotators. The work both validates LLMs’ capacity for high-fidelity, affectively sensitive evaluation and introduces an expert-consensus-based paradigm for assessing how well LLMs discriminate empathic quality, providing methodological grounding and deployment guidance for AI-powered mental health support tools.
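The cross-group reliability metrics mentioned above can be illustrated with a minimal sketch. The snippet below implements Cohen's κ for two raters from its textbook definition (observed agreement corrected for chance agreement); the rating data shown is an invented toy example, not from the paper.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement under independence, from each rater's
    # marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy data: two annotators labeling 10 utterances as empathic (1) or not (0).
expert = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
llm    = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(expert, llm), 3))  # → 0.583
```

Here 8 of 10 labels agree (p_o = 0.8), but because both raters use label 1 about 60% of the time, chance agreement is already 0.52, so κ lands at 0.583 rather than 0.8. Krippendorff's α generalizes this idea to many raters, missing data, and other distance metrics.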

📝 Abstract
Large language models (LLMs) excel at generating empathic responses in text-based conversations. But how reliably do they judge the nuances of empathic communication? We investigate this question by comparing how experts, crowdworkers, and LLMs annotate empathic communication across four evaluative frameworks drawn from psychology, natural language processing, and communication studies, applied to 200 real-world conversations where one speaker shares a personal problem and the other offers support. Drawing on 3,150 expert annotations, 2,844 crowd annotations, and 3,150 LLM annotations, we assess inter-rater reliability between these three annotator groups. We find that expert agreement is high but varies across the frameworks' sub-components depending on their clarity, complexity, and subjectivity. We show that expert agreement offers a more informative benchmark for contextualizing LLM performance than standard classification metrics. Across all four frameworks, LLMs consistently approach this expert-level benchmark and exceed the reliability of crowdworkers. These results demonstrate how LLMs, when validated on specific tasks with appropriate benchmarks, can support transparency and oversight in emotionally sensitive applications, including their use as conversational companions.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM reliability in judging empathic communication nuances
Comparing expert, crowdworker, and LLM annotations on empathy frameworks
Validating LLMs for emotionally sensitive applications like companionship
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares the reliability of expert, crowdworker, and LLM annotations
Uses expert agreement as the benchmark for LLM performance
Shows that LLMs approach expert-level reliability in empathy judgment