🤖 AI Summary
This paper addresses the challenge of exemplar-driven arbitrary text style transfer by proposing a zero-shot LLM-based style control method grounded in linguistic register analysis. The core method introduces register theory into prompt design for the first time, enabling explicit disentanglement, precise extraction, and controllable rewriting of stylistic features—without fine-tuning or additional training. It integrates register modeling, zero-shot style feature inference, and a multidimensional evaluation framework assessing both style strength and semantic fidelity. On multiple benchmark tasks, our approach achieves a 23.6% improvement in style matching accuracy and an 18.4% gain in semantic preservation over existing prompt-based methods, significantly overcoming the dual limitations of conventional template-based and fine-tuning approaches in generalizability and meaning preservation.
📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities in rewriting text across various styles. However, effectively leveraging this ability for example-based arbitrary style transfer, where an input text is rewritten to match the style of a given exemplar, remains an open challenge. A key question is how to describe the style of the exemplar to guide LLMs toward high-quality rewrites. In this work, we propose a prompting method based on register analysis to guide LLMs to perform this task. Empirical evaluations across multiple style transfer tasks show that our prompting approach enhances style transfer strength while preserving meaning more effectively than existing prompting strategies.