Are LLM-generated plain language summaries truly understandable? A large-scale crowdsourced evaluation

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of Large Language Model (LLM)-generated plain language summaries (PLSs) in medicine rely heavily on automated metrics or subjective Likert-scale ratings, and so fail to capture genuine comprehension by lay readers. Method: We conducted a large-scale crowdsourced study (N=150) that systematically integrates reader preference judgments, objective comprehension assessments (multiple-choice tests), and correlation analyses with ten automated metrics (e.g., BERTScore, FactScore). Contribution/Results: While LLM-generated PLSs achieve surface-level quality comparable to human-authored summaries, they lead to significantly poorer comprehension and recall; human-written PLSs consistently outperform LLM-generated ones on the objective measures. Critically, none of the automated metrics reliably predicts human comprehension. These findings challenge the prevailing surface-quality-centric evaluation paradigm and motivate evaluation frameworks grounded in empirically measured reader understanding. This is the first study to assess PLS efficacy through real-world reader comprehension rather than proxy metrics.

📝 Abstract
Plain language summaries (PLSs) are essential for facilitating effective communication between clinicians and patients by making complex medical information easier for laypeople to understand and act upon. Large language models (LLMs) have recently shown promise in automating PLS generation, but their effectiveness in supporting health information comprehension remains unclear. Prior evaluations have generally relied on automated scores that do not measure understandability directly, or subjective Likert-scale ratings from convenience samples with limited generalizability. To address these gaps, we conducted a large-scale crowdsourced evaluation of LLM-generated PLSs using Amazon Mechanical Turk with 150 participants. We assessed PLS quality through subjective Likert-scale ratings focusing on simplicity, informativeness, coherence, and faithfulness; and objective multiple-choice comprehension and recall measures of reader understanding. Additionally, we examined the alignment between 10 automated evaluation metrics and human judgments. Our findings indicate that while LLMs can generate PLSs that appear indistinguishable from human-written ones in subjective evaluations, human-written PLSs lead to significantly better comprehension. Furthermore, automated evaluation metrics fail to reflect human judgment, calling into question their suitability for evaluating PLSs. This is the first study to systematically evaluate LLM-generated PLSs based on both reader preferences and comprehension outcomes. Our findings highlight the need for evaluation frameworks that move beyond surface-level quality and for generation methods that explicitly optimize for layperson comprehension.
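The abstract's central methodological question, whether automated metric scores track what readers actually understand, can be illustrated with a small analysis sketch. The snippet below is not the authors' code; the column names, toy values, and the choice of Spearman rank correlation are assumptions made purely to show how metric-comprehension alignment might be checked per summary.

```python
# Minimal sketch (not the paper's code): does an automated metric's score for a
# summary track the comprehension accuracy of readers who saw that summary?
# Data values, column names, and the use of Spearman correlation are assumed.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-summary records: one automated metric score (e.g., BERTScore F1)
# and the mean multiple-choice accuracy of crowdworkers who read that summary.
df = pd.DataFrame({
    "summary_id":        ["s1", "s2", "s3", "s4", "s5"],
    "bertscore_f1":      [0.91, 0.88, 0.93, 0.86, 0.90],
    "comprehension_acc": [0.72, 0.65, 0.70, 0.61, 0.75],
})

rho, p_value = spearmanr(df["bertscore_f1"], df["comprehension_acc"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A weak or non-significant correlation across many summaries would mirror the
# paper's finding that automated metrics do not reflect reader comprehension.
```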
Problem

Research questions and friction points this paper is trying to address.

Evaluating understandability of LLM-generated plain language summaries
Assessing alignment between automated metrics and human judgments
Improving generation methods for better layperson comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale crowdsourced evaluation using Amazon Mechanical Turk
Combined subjective Likert-scale ratings with objective comprehension measures
Examined alignment between automated metrics and human judgments
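As one concrete example of the automated-metric side of this comparison, the sketch below scores a candidate plain language summary against a reference with BERTScore, one of the metrics named in the summary above. It uses the open-source bert-score package and is illustrative only, not the authors' evaluation pipeline; the example texts are invented.

```python
# Illustrative only: scoring a candidate PLS against a reference with BERTScore
# via the open-source bert-score package (pip install bert-score). The texts
# are invented placeholders, not items from the study.
from bert_score import score

candidates = ["The trial found that the new drug lowered blood pressure more than usual care."]
references = ["In a randomized trial, the intervention reduced systolic blood pressure significantly compared with standard care."]

P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.item():.3f}")
# The study's caution applies here: a high similarity score does not imply that
# lay readers will actually comprehend or recall the summary's content.
```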
Authors
Yue Guo
School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL
Jae Ho Sohn
Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
Gondy Leroy
Eller College of Management, University of Arizona, Tucson, AZ
Trevor Cohen
University of Washington
Distributional Semantics, Computational Linguistics, Biomedical Informatics, Information Retrieval, Literature-based Discovery