🤖 AI Summary
Existing evaluations of Large Language Model (LLM)-generated Plain Language Summaries (PLS) in medicine rely heavily on automated metrics or subjective Likert scales, failing to capture genuine comprehension by lay readers.
Method: We conducted a large-scale crowdsourced study (N=150), systematically integrating reader preference judgments, objective comprehension assessments (multiple-choice tests), and correlation analyses with ten automated metrics (e.g., BERTScore, FactScore).
Contribution/Results: While LLM-generated PLS are rated comparably to human-authored summaries on surface-level quality, readers comprehend and recall them significantly less well. Human-written PLS consistently outperform LLM-generated ones across all objective measures, and none of the ten automated metrics reliably predicts human comprehension performance. These findings challenge the prevailing surface-quality-centric evaluation paradigm and motivate an assessment framework grounded in empirically measured, authentic reader understanding. This is the first study to evaluate PLS efficacy through real-world reader comprehension rather than proxy metrics alone.
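The correlation analysis described above pairs automated metric scores with measured reader comprehension. As a minimal, hypothetical sketch of how such an analysis can be set up (this is not the authors' actual pipeline; the data values and variable names are invented for illustration), the example below scores candidate PLSs with BERTScore and computes a Spearman rank correlation against per-summary comprehension accuracy:

```python
# Illustrative sketch only (not the paper's pipeline): correlate one automated
# metric (BERTScore) with per-summary human comprehension accuracy.
# Requires: pip install bert-score scipy
from bert_score import score       # reference BERTScore implementation
from scipy.stats import spearmanr

# Hypothetical data: one entry per plain language summary shown to readers.
candidates = [
    "The drug lowered blood pressure in most patients.",
    "People who exercised reported fewer headaches.",
    "The new test finds the infection earlier than the old one.",
]
references = [
    "The medication reduced blood pressure for the majority of participants.",
    "Regular exercise was associated with a lower frequency of headaches.",
    "The novel assay detects the infection at an earlier stage than the standard assay.",
]
# Fraction of multiple-choice comprehension questions answered correctly
# by readers of each summary (hypothetical values).
comprehension_acc = [0.62, 0.81, 0.74]

# BERTScore F1 for each candidate summary against its reference text.
_, _, f1 = score(candidates, references, lang="en", verbose=False)

# Spearman rank correlation between the automated metric and human comprehension.
rho, p_value = spearmanr(f1.tolist(), comprehension_acc)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

In the study's setting, an analysis of this kind would be repeated for each of the ten automated metrics, against both the subjective Likert ratings and the objective comprehension scores.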
📝 Abstract
Plain language summaries (PLSs) are essential for facilitating effective communication between clinicians and patients by making complex medical information easier for laypeople to understand and act upon. Large language models (LLMs) have recently shown promise in automating PLS generation, but their effectiveness in supporting health information comprehension remains unclear. Prior evaluations have generally relied on automated scores that do not measure understandability directly, or subjective Likert-scale ratings from convenience samples with limited generalizability. To address these gaps, we conducted a large-scale crowdsourced evaluation of LLM-generated PLSs using Amazon Mechanical Turk with 150 participants. We assessed PLS quality through subjective Likert-scale ratings of simplicity, informativeness, coherence, and faithfulness, as well as objective multiple-choice comprehension and recall measures of reader understanding. Additionally, we examined the alignment between ten automated evaluation metrics and human judgments. Our findings indicate that while LLMs can generate PLSs that appear indistinguishable from human-written ones in subjective evaluations, human-written PLSs lead to significantly better comprehension. Furthermore, automated evaluation metrics fail to reflect human judgment, calling into question their suitability for evaluating PLSs. This is the first study to systematically evaluate LLM-generated PLSs based on both reader preferences and comprehension outcomes. Our findings highlight the need for evaluation frameworks that move beyond surface-level quality and for generation methods that explicitly optimize for layperson comprehension.