VTechAGP: An Academic-to-General-Audience Text Paraphrase Dataset and Benchmark Models

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the underexplored problem of document-level paraphrasing of academic texts for general audiences. We introduce VTechAGP, the first large-scale academic-to-general-audience paraphrase dataset, comprising document-level pairs of academic and general-audience abstracts from theses and dissertations authored at 8 colleges over 25 years. To tackle this task, we propose DSPT5, a lightweight dynamic soft-prompt generative language model that combines (i) a contrastive-generative loss function for learning the keyword vectors in the dynamic prompt, and (ii) a crowd-sampling decoding strategy that selects the best output candidate at both semantic and structural levels. Compared to mainstream large language models, which do not produce satisfactory outcomes on this task, the lightweight DSPT5 achieves competitive results despite having significantly fewer parameters, empirically supporting domain-aware, context-specific paraphrasing tailored to scholarly communication.
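The "dynamic soft prompt" in DSPT5 follows the general soft-prompting idea: a small set of learnable vectors is prepended to the frozen token embeddings before they enter the encoder. A minimal sketch of that mechanism (function names and dimensions are illustrative, not from the paper):

```python
import numpy as np

def prepend_soft_prompt(token_embeds, soft_prompt):
    """Prepend learnable soft-prompt vectors to a sequence of token embeddings.

    token_embeds: (seq_len, d) array of (frozen) input token embeddings
    soft_prompt:  (n_prompt, d) array of trainable prompt vectors
    Returns the (n_prompt + seq_len, d) sequence fed to the encoder.
    """
    return np.concatenate([soft_prompt, token_embeds], axis=0)

rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(5, d))   # 5 input tokens
prompt = rng.normal(size=(3, d))   # 3 soft-prompt vectors, updated by gradient descent
augmented = prepend_soft_prompt(tokens, prompt)
print(augmented.shape)  # (8, 8)
```

In the paper's setting the prompt vectors are additionally conditioned on learned keyword vectors, which is what makes the prompt "dynamic" rather than a fixed prefix.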
📝 Abstract
Existing text simplification and paraphrase datasets mainly focus on sentence-level text generation in a general domain, and are typically developed without using domain knowledge. In this paper, we release a novel dataset, VTechAGP, the first academic-to-general-audience text paraphrase dataset, consisting of document-level pairs of academic and general-audience abstracts from theses and dissertations authored at 8 colleges over 25 years. We also propose a novel dynamic soft prompt generative language model, DSPT5. For training, we leverage a contrastive-generative loss function to learn the keyword vectors in the dynamic prompt. For inference, we adopt a crowd-sampling decoding strategy at both semantic and structural levels to further select the best output candidate. We evaluate DSPT5 and various state-of-the-art large language models (LLMs) from multiple perspectives. Results demonstrate that the SOTA LLMs do not provide satisfactory outcomes, while the lightweight DSPT5 can achieve competitive results. To the best of our knowledge, we are the first to build a benchmark dataset and solutions for academic-to-general-audience text paraphrasing. Models will be made public after acceptance.
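The contrastive-generative training objective described in the abstract can be understood as a weighted sum of a standard generative loss and a contrastive term over keyword vectors. A minimal sketch under that reading (the InfoNCE-style form, variable names, and weighting are assumptions, not the paper's exact formulation):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Simplified contrastive (InfoNCE-style) term: pull the anchor keyword
    vector toward its positive and push it away from the negatives."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

def joint_loss(gen_loss, anchor, positive, negatives, lam=0.5):
    """Contrastive-generative objective: generative (e.g. cross-entropy)
    loss plus a lambda-weighted contrastive term."""
    return gen_loss + lam * info_nce(anchor, positive, negatives)

# Toy usage: a matched keyword pair should incur a lower joint loss
# than a mismatched (opposing) one.
rng = np.random.default_rng(1)
anchor = rng.normal(size=4)
negatives = [rng.normal(size=4) for _ in range(3)]
matched = joint_loss(1.0, anchor, anchor, negatives)      # positive == anchor
mismatched = joint_loss(1.0, anchor, -anchor, negatives)  # positive opposes anchor
```

The contrastive term shapes the keyword-vector space while the generative term drives fluent output; `lam` trades the two off.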
Problem

Research questions and friction points this paper is trying to address.

Create academic-to-general text paraphrase dataset.
Develop dynamic soft prompt generative model.
Evaluate models for text paraphrase effectiveness.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic soft prompt model
Contrastive-generative loss function
Crowd-sampling decoding strategy
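The crowd-sampling decoding strategy listed above amounts to sampling several candidate outputs and reranking them with a combined semantic and structural score. A minimal sketch, with toy scoring functions standing in for whatever metrics the paper actually uses:

```python
def crowd_sample_decode(generate, score_semantic, score_structural, k=5, alpha=0.5):
    """Sample k candidate outputs and keep the one with the best
    alpha-weighted combination of semantic and structural scores."""
    candidates = [generate() for _ in range(k)]
    def combined(c):
        return alpha * score_semantic(c) + (1 - alpha) * score_structural(c)
    return max(candidates, key=combined)

# Toy usage: "generate" cycles through canned candidates; semantic score is
# word overlap with the source, structural score rewards shorter sentences.
outputs = iter(["the study shows cells divide fast",
                "cells divide rapidly in this study",
                "mitochondria are the powerhouse"])
source = set("the study shows that cells divide rapidly".split())

def semantic(c):
    return len(source & set(c.split())) / len(source)

def structural(c):
    return 1.0 / len(c.split())

best = crowd_sample_decode(lambda: next(outputs), semantic, structural, k=3)
```

In practice the semantic score would come from an embedding-similarity model and the structural score from readability or length statistics, but the selection logic is the same.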