LLM-based Text Simplification and its Effect on User Comprehension and Cognitive Load

πŸ“… 2025-05-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Low readability of domain-specific texts (e.g., biomedical, legal, financial) imposes high cognitive load and impedes comprehension. Method: an LLM-based self-refinement framework for minimally lossy text simplification that jointly optimizes readability and semantic fidelity. The approach is validated in a randomized study with 4,563 participants across six broad subject areas, with cognitive load measured via a simplified NASA Task Load Index (NASA-TLX); the study includes a no-reference condition in which participants cannot refer back to the text while answering questions. Results: simplified texts improve MCQ accuracy by 3.9% absolute on average (up to 14.6% on PubMed) and raise self-reported ease by 0.33 points on a 5-point scale (p < 0.05), and the comprehension gain persists even when participants cannot refer back to the text. This work provides a large-scale empirical foundation for low-loss, LLM-driven text simplification.

πŸ“ Abstract
Information on the web, such as scientific publications and Wikipedia, often surpasses users' reading level. To help address this, we used a self-refinement approach to develop an LLM capability for minimally lossy text simplification. To validate our approach, we conducted a randomized study involving 4,563 participants and 31 texts spanning 6 broad subject areas: PubMed (biomedical scientific articles), biology, law, finance, literature/philosophy, and aerospace/computer science. Participants were randomized to view either original or simplified texts in a subject area, and answered multiple-choice questions (MCQs) that tested their comprehension of the text. The participants were also asked to provide qualitative feedback such as task difficulty. Our results indicate that participants who read the simplified text answered more MCQs correctly than their counterparts who read the original text (3.9% absolute increase, p<0.05). This gain was most striking for PubMed (14.6%), with more moderate gains in the finance (5.5%), aerospace/computer science (3.8%), and legal (3.5%) domains. Notably, the results were robust to whether participants could refer back to the text while answering MCQs: absolute accuracy decreased by up to ~9% for both the original and simplified setups when participants could not refer back to the text, but the ~4% overall improvement persisted. Finally, participants' self-reported perceived ease, based on a simplified NASA Task Load Index, was greater for those who read the simplified text (absolute change of 0.33 on a 5-point scale, p<0.05). This randomized study, involving an order of magnitude more participants than prior works, demonstrates the potential of LLMs to make complex information easier to understand. Our work aims to enable a broader audience to better learn and make use of the expert knowledge available on the web, improving information accessibility.
Problem

Research questions and friction points this paper is trying to address.

Does LLM-based text simplification improve user comprehension of complex content?
Do simplified texts reduce cognitive load relative to the original versions?
How can expert knowledge on the web be made accessible to a broader audience?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based self-refinement approach for minimally lossy text simplification
Validation through a randomized study with 4,563 participants across 6 subject areas
Demonstrated gains in comprehension and reductions in cognitive load
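The paper does not publish its simplification pipeline, but the self-refinement idea can be illustrated with a minimal sketch: iteratively apply a simplification step until a readability target is met. Here Flesch Reading Ease is used as a stand-in readability metric, and the `simplify_step` callable, the syllable heuristic, and the target threshold are all assumptions for illustration, not the authors' implementation (which would also need a semantic-fidelity check between iterations):

```python
import re

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups, minimum one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def self_refine(text, simplify_step, target=60.0, max_iters=3):
    """Apply `simplify_step` (e.g., an LLM rewrite call) until the
    readability target is reached or the iteration budget runs out."""
    for _ in range(max_iters):
        if flesch_reading_ease(text) >= target:
            break
        text = simplify_step(text)
    return text

# Toy usage with a deterministic "simplifier" standing in for an LLM call:
simplified = self_refine("We utilize data.",
                         lambda t: t.replace("utilize", "use"))
```

In the paper's setting, `simplify_step` would be an LLM prompt that rewrites the text, and each iteration would additionally verify that meaning is preserved (the "minimally lossy" constraint) before accepting the rewrite.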
πŸ”Ž Similar Papers
No similar papers found.
Authors

Theo Guidroz (Google)
Diego Ardila (Google)
Jimmy Li (Google)
Adam Mansour (Google)
Paul Jhun (Google)
Nina Gonzalez (Google)
Xiang Ji (Google)
Mike Sanchez (Google)
Sujay S Kakarmath (Google)
Mathias MJ Bellaiche (Google)
Miguel Ángel Garrido (Google)
Faruk Ahmed (AI Scientist, Mistral)
Divyansh Choudhary (Google)
Jay Hartford (Google)
Chenwei Xu (Northwestern University)
Henry Javier Serrano Echeverria (Google)
Yifan Wang (Google)
Jeff Shaffer (Google)
Eric Cao (Google)
Yossi Matias (Google)
Avinatan Hassidim (Google)
D. Webster (Google)
Yun Liu (Google)
Sho Fujiwara (Google)
Peggy Bui (Google)
Quang Duong (Google)