🤖 AI Summary
This work proposes a lightweight, modular framework for author style transfer that overcomes the high cost, inflexibility, and poor semantic fidelity of existing approaches. The method trains individual LoRA-based style adapters for high-resource authors and employs a hierarchical blending mechanism to synthesize target writing styles using only a few exemplars. By introducing an interpretable, layer-wise adapter fusion strategy, the framework achieves state-of-the-art performance under low-resource conditions—outperforming current methods, including GPT-5.1—while simultaneously optimizing both style transfer accuracy and semantic preservation.
📝 Abstract
The task of authorship style transfer is to rewrite text in the style of a target author while preserving the meaning of the original. Existing methods train a single model on large corpora to capture all target styles at once: this high-cost approach offers limited flexibility for target-specific adaptation and often sacrifices meaning preservation for style strength. In this paper, we propose AuthorMix: a lightweight, modular, and interpretable style transfer framework. We train individual, style-specific LoRA adapters on a small set of high-resource authors, then rapidly build a specialized model for each new target via learned, layer-wise adapter mixing, using only a handful of target-style training examples. AuthorMix outperforms existing state-of-the-art style-transfer baselines -- as well as GPT-5.1 -- on low-resource targets, achieving the highest overall score and substantially improving meaning preservation.
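The abstract does not spell out the fusion equations, but the layer-wise mixing can be sketched under a common set of assumptions: each source-author adapter contributes a low-rank update B_k A_k per layer, and the learned mixing coefficients are a per-layer softmax over source adapters (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def mix_lora_adapters(adapters, logits):
    """Blend per-layer LoRA updates from K source-author adapters.

    adapters: list of K adapters; each adapter is a list of L (A, B) pairs,
              where a layer's LoRA weight update is B @ A
              (B: d_out x r, A: r x d_in).
    logits:   (L, K) array of learned mixing logits, one row per layer.
    Returns a list of L blended weight updates, each d_out x d_in.
    """
    L, K = logits.shape
    # Softmax over the K source adapters, computed independently per layer,
    # so each layer can weight the source styles differently (and the
    # weights stay interpretable as a distribution over source authors).
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    mixed = []
    for layer in range(L):
        delta = sum(
            weights[layer, k] * (adapters[k][layer][1] @ adapters[k][layer][0])
            for k in range(K)
        )
        mixed.append(delta)
    return mixed
```

In this sketch only the (L, K) logit matrix would be trained on the handful of target-style exemplars, while the per-author adapters stay frozen, which is what makes per-target adaptation cheap.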