🤖 AI Summary
This study systematically compares communication styles between large language models (LLMs) and human experts in health misinformation refutation, and examines reader preferences. Addressing the gap in understanding how LLM-generated explanations differ from human-authored ones in health contexts, we adopt a tri-dimensional framework grounded in health communication theory: linguistic features of messages, sender-level persuasive strategies, and alignment with recipient values. Methodologically, we integrate computational text analysis, expert annotation, a 99-participant double-blind user study, and a novel authoritative fact-checking dataset. Our multi-layered evaluation, the first of its kind, reveals that LLM explanations significantly underperform human-authored ones in persuasiveness, certainty expression, and moral value alignment; yet over 60% of participants rated them as clearer, more comprehensive, and more persuasive, underscoring the critical role of structural coherence in reader engagement. These findings provide both a theoretical framework and an empirical foundation for the trustworthy adaptation of LLMs in health communication.
📝 Abstract
With the wide adoption of large language models (LLMs) in information assistance, it is essential to examine their alignment with human communication styles and values. We situate this study within the context of fact-checking health information, given the critical challenge of rectifying misconceptions and building trust. Recent studies have explored the potential of LLMs for health communication, but style differences between LLMs and human experts, and the associated reader perceptions, remain under-explored. In this light, our study evaluates the communication styles of LLMs, focusing on how their explanations differ from those of humans in three core components of health communication: information, sender, and receiver. We compiled a dataset of 1,498 health misinformation explanations from authoritative fact-checking organizations and generated LLM responses to inaccurate health information. Drawing from health communication theory, we evaluate communication styles across three key dimensions: information linguistic features, sender persuasive strategies, and receiver value alignment. We further assess human perceptions through a blinded evaluation with 99 participants. Our findings reveal that LLM-generated articles scored significantly lower in persuasive strategies, certainty expressions, and alignment with social values and moral foundations. However, human evaluation demonstrated a strong preference for LLM content, with over 60% of responses favoring LLM articles for clarity, completeness, and persuasiveness. Our results suggest that LLMs' structured approach to presenting information may be more effective at engaging readers, despite scoring lower on traditional measures of quality in fact-checking and health communication.