🤖 AI Summary
This study addresses the high manual cost and scarcity of annotated data for generating Easy-to-Read (ETR) texts for people with cognitive disabilities. We propose an automated ETR generation framework that combines Retrieval-Augmented Generation (RAG), Multi-Task Learning (MTL), and Low-Rank Adaptation (LoRA). Our method jointly models text summarization, text simplification, and ETR generation, fine-tuning Mistral-7B and LLaMA-3-8B via MTL-LoRA while leveraging RAG-based in-context learning to ease the domain generalization bottleneck. Experiments on ETR-fr, a new high-quality French benchmark, demonstrate that: (1) the multi-task approach consistently outperforms single-task baselines; (2) RAG substantially improves out-of-domain generalization; and (3) MTL-LoRA outperforms all other learning strategies in in-domain settings. The framework offers a scalable, robust, LLM-driven route to equitable information access in resource-constrained scenarios.
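
To give a concrete picture of the fine-tuning side, the sketch below shows how a shared LoRA adapter can be trained on a mixture of the three tasks using Hugging Face PEFT. It is a minimal illustration, not the paper's code: MTL-LoRA as described in the paper adds task-specific low-rank components, whereas this sketch uses a single standard LoRA adapter over a task-tagged data mixture, and the model names, hyperparameters, task-tag format, and example texts are placeholders.

```python
# Minimal sketch (not the authors' implementation) of multi-task LoRA fine-tuning
# with Hugging Face transformers + PEFT. A single shared adapter is trained on a
# mixture of summarization, simplification, and ETR generation examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "mistralai/Mistral-7B-v0.1"  # or a LLaMA-3-8B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Shared low-rank adapter on the attention projections (ranks/targets are assumptions).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hypothetical multi-task mixture: each example is tagged with its task so the
# model learns the three related tasks jointly.
def format_example(task: str, source: str, target: str) -> str:
    return f"<{task}>\n{source}\n### Output:\n{target}"

batch = [
    format_example("summarization", "Long article ...", "Short summary ..."),
    format_example("simplification", "Complex sentence ...", "Simple sentence ..."),
    format_example("etr", "Source document ...", "Easy-to-Read version ..."),
]
inputs = tokenizer(batch, return_tensors="pt", padding=True)
# From here, a standard causal-LM training loop (e.g. transformers.Trainer with
# labels = input_ids) updates only the LoRA parameters.
```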
📝 Abstract
Simplifying complex texts is essential for ensuring equitable access to information, especially for individuals with cognitive impairments. The Easy-to-Read (ETR) initiative offers a framework for making content accessible to the neurodivergent population, but the manual creation of such texts remains time-consuming and resource-intensive. In this work, we investigate the potential of large language models (LLMs) to automate the generation of ETR content. To address the scarcity of aligned corpora and the specificity of ETR constraints, we propose a multi-task learning (MTL) approach that trains models jointly on text summarization, text simplification, and ETR generation. We explore two strategies: multi-task retrieval-augmented generation (RAG) for in-context learning, and MTL-LoRA for parameter-efficient fine-tuning. Our experiments with Mistral-7B and LLaMA-3-8B on ETR-fr, a new high-quality dataset, demonstrate the benefits of multi-task setups over single-task baselines across all configurations. Moreover, results show that the RAG-based strategy enables generalization in out-of-domain settings, while MTL-LoRA outperforms all other learning strategies in in-domain configurations.
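
To make the in-context learning strategy concrete, here is a minimal sketch of multi-task retrieval-augmented prompting: for a new input, the most similar demonstration from each task's pool is retrieved with a sentence-embedding model and concatenated into a few-shot prompt, with ETR generation as the final query. The embedding model, the demonstration pools, and the prompt template are assumptions for illustration; the paper's actual retrieval setup may differ.

```python
# Minimal sketch (not the paper's implementation) of multi-task RAG prompting:
# retrieve one nearest example per task and assemble a few-shot prompt.
from sentence_transformers import SentenceTransformer, util

# Placeholder encoder; a multilingual model would suit French ETR-fr content.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical pools of (source, target) demonstrations for each task.
pools = {
    "summarization": [("Long article ...", "Short summary ...")],
    "simplification": [("Complex sentence ...", "Simple sentence ...")],
    "etr": [("Source document ...", "Easy-to-Read version ...")],
}

def build_prompt(query: str) -> str:
    """Build a few-shot prompt from the nearest demonstration of each task."""
    q_emb = encoder.encode(query, convert_to_tensor=True)
    shots = []
    for task, pool in pools.items():
        sources = [src for src, _ in pool]
        src_embs = encoder.encode(sources, convert_to_tensor=True)
        best = int(util.cos_sim(q_emb, src_embs)[0].argmax())
        src, tgt = pool[best]
        shots.append(f"Task: {task}\nInput: {src}\nOutput: {tgt}")
    # The new document is posed as the final ETR generation query.
    shots.append(f"Task: etr\nInput: {query}\nOutput:")
    return "\n\n".join(shots)

print(build_prompt("A new administrative procedure described in dense legal language ..."))
```

The resulting prompt would then be passed to the base LLM (e.g. Mistral-7B or LLaMA-3-8B) for generation without any parameter updates, which is what allows this strategy to transfer to out-of-domain documents.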