Text Style Transfer with Parameter-efficient LLM Finetuning and Round-trip Translation

📅 2026-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of scarce parallel corpora in text style transfer by proposing an approach that operates without authentic parallel data. The method leverages round-trip translation to generate neutral-style texts as shared input representations and integrates parameter-efficient fine-tuning (PEFT) with retrieval-augmented generation (RAG) to improve terminological consistency and stylistic control. Evaluated across four domains, the proposed framework consistently outperforms zero-shot prompting and few-shot in-context learning (ICL) on both BLEU and style-accuracy scores. This advancement circumvents the traditional reliance on manually annotated parallel corpora, offering a scalable and effective solution for style transfer under low-resource conditions.
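The data-synthesis step described above can be sketched as follows. The `translate` function here is a hypothetical stand-in for a real machine-translation system (not named in the paper); its toy behavior only mimics the style-flattening effect of a real round trip through a pivot language.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT call; a real pipeline would invoke an actual MT model.

    Toy behavior: the return leg to English lowercases the text and softens
    punctuation, imitating how a round trip tends to strip stylistic markers.
    """
    if tgt == "en":
        return text.replace("!!", ".").lower()
    return text

def round_trip(text: str, pivot: str = "de") -> str:
    """Translate into a pivot language and back to obtain a 'neutralized' text."""
    pivoted = translate(text, src="en", tgt=pivot)
    return translate(pivoted, src=pivot, tgt="en")

styled = "This gadget is AWESOME!!"
neutral = round_trip(styled)
# The pair (neutral -> styled) becomes a synthetic parallel training example:
pair = (neutral, styled)
```

At inference time the same neutralization is applied to the input, so training and test inputs share one "neutral" style, which is the key idea behind using round-trip translation in place of authentic parallel corpora.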

📝 Abstract
This paper proposes a novel method for Text Style Transfer (TST) based on parameter-efficient fine-tuning of Large Language Models (LLMs). Addressing the scarcity of parallel corpora that map between styles, the study employs round-trip translation to synthesize such parallel datasets from monolingual corpora. This approach creates 'neutralized' text devoid of stylistic attributes, essentially creating a shared input style at training time and inference time. Experimental results demonstrate consistent superiority of this method over zero-shot prompting and few-shot ICL techniques, measured by BLEU scores and style accuracy across four investigated domains. Furthermore, the integration of retrieval-augmented generation (RAG) for terminology and name knowledge enhances robustness and stylistic consistency.
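The parameter-efficient fine-tuning mentioned in the abstract can be illustrated with a LoRA-style low-rank adapter, a common PEFT choice; this is an assumption for illustration, since the abstract does not name a specific PEFT method. Instead of updating the full weight matrix W, only two small matrices A and B are trained, and the effective weight becomes W + BA.

```python
import numpy as np

# LoRA-style sketch (assumed PEFT variant, not stated in the paper):
# the frozen weight W has shape (d_out, d_in); trainable adapters
# A (r, d_in) and B (d_out, r) add a rank-r delta B @ A.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                   # zero init: training starts at W

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)                    # forward pass with the adapter

# Trainable parameter count shrinks from d_out*d_in to r*(d_in + d_out).
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
```

Because B starts at zero, the adapted model initially reproduces the frozen model exactly; only the roughly r/d-fraction of parameters in A and B is updated during fine-tuning.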
Problem

Research questions and friction points this paper is trying to address.

Text Style Transfer
parallel corpora
style mapping
data scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient fine-tuning
Round-trip translation
Text Style Transfer
Retrieval-augmented generation
Neutralized text
Ruoxi Liu
Department of Computer Science, Johns Hopkins University
Philipp Koehn
Professor, Johns Hopkins University
Machine Translation · Natural Language Processing