🤖 AI Summary
This study addresses sentence simplification for English as a second language (L2) learners, proposing a CEFR-aligned approach that jointly optimizes simplification quality and target-level lexical coverage. Unlike conventional readability-driven paradigms, it is the first to explicitly orient simplification toward language-acquisition support, achieving level appropriateness and pedagogical lexical value without parallel corpora. Methodologically, the authors introduce a reinforcement learning framework built on a large language model with a dual-granularity reward mechanism: token-level rewards maximize the frequency and diversity of target-level CEFR vocabulary, while sentence-level rewards preserve fluency and semantic fidelity; iterative self-training on the model's own outputs further refines the policy. Experiments on the CEFR-SP and TurkCorpus datasets demonstrate an improvement of over 20% in target-level lexical coverage while maintaining state-of-the-art BLEU and SARI scores.
📝 Abstract
Text simplification is crucial for improving accessibility and comprehension for English as a Second Language (ESL) learners. This study goes a step further and aims to facilitate ESL learners' language acquisition through simplification. Specifically, we propose simplifying complex sentences to levels appropriate for learners while also increasing vocabulary coverage of the target level in the simplifications. We achieve this without a parallel corpus by applying reinforcement learning to a large language model. Our method employs token-level and sentence-level rewards and iteratively trains the model on its self-generated outputs, guiding it to search for simplification hypotheses that satisfy the target attributes. Experimental results on the CEFR-SP and TurkCorpus datasets show that the proposed method increases the frequency and diversity of target-level vocabulary by more than 20% compared to baseline models, while maintaining high simplification quality.
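The dual-granularity reward described above can be illustrated with a minimal sketch. The toy vocabulary list, the equal weighting, and the scalar fluency/semantic-similarity inputs are all illustrative assumptions, not the authors' actual implementation (which would use a real CEFR word list, a language-model fluency score, and an embedding-based similarity measure):

```python
# Hypothetical sketch of a dual-granularity reward for RL-based simplification.
# TARGET_CEFR_VOCAB is a toy stand-in for a real CEFR target-level word list.
TARGET_CEFR_VOCAB = {"improve", "simple", "learn", "level", "word"}

def token_level_reward(tokens, target_vocab=TARGET_CEFR_VOCAB):
    """Reward the frequency and diversity of target-level vocabulary in the output."""
    hits = [t for t in tokens if t in target_vocab]
    frequency = len(hits) / max(len(tokens), 1)            # share of tokens at target level
    diversity = len(set(hits)) / max(len(target_vocab), 1) # distinct target words covered
    return frequency + diversity

def sentence_level_reward(fluency, semantic_sim):
    """Combine fluency and semantic fidelity scores (equal weighting is an assumption)."""
    return 0.5 * fluency + 0.5 * semantic_sim

def total_reward(tokens, fluency, semantic_sim, alpha=0.5):
    """Blend token-level and sentence-level rewards; alpha is a tunable trade-off."""
    return alpha * token_level_reward(tokens) + (1 - alpha) * sentence_level_reward(fluency, semantic_sim)

tokens = "learners learn simple words to improve their level".split()
r = total_reward(tokens, fluency=0.9, semantic_sim=0.85)
```

In an RL loop, `r` would score each self-generated simplification hypothesis, so the policy is pushed toward outputs that use more (and more varied) target-level words without sacrificing fluency or meaning.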