🤖 AI Summary
Existing style embedding models are English-only, limiting multilingual style analysis and transfer. To address this, the authors propose mStyleDistance (Multilingual StyleDistance), a multilingual style embedding model covering nine languages, trained on synthetic data with contrastive learning to produce consistent cross-lingual style representations. They also construct a multilingual STEL-or-Content benchmark to assess how well the embeddings separate style from content. mStyleDistance outperforms existing models on these multilingual style benchmarks, performs well on a cross-lingual authorship verification task, and generalizes to unseen features and languages. The model and benchmark are publicly released on Hugging Face.
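Authorship verification with style embeddings typically reduces to thresholding the similarity between two texts' embeddings. A minimal sketch of that decision rule, using a hypothetical `embed` function (a toy character-frequency vector here, standing in for the actual model, which is not reproduced):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a style embedding model:
    a normalized letter-frequency vector, NOT mStyleDistance."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def same_author(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Predict 'same author' when cosine similarity of the
    two style embeddings exceeds the threshold."""
    return float(embed(text_a) @ embed(text_b)) >= threshold
```

In practice the embeddings would come from the released model and the threshold would be tuned on held-out verification pairs.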
📝 Abstract
Style embeddings are useful for stylistic analysis and style transfer; however, only English style embeddings have been made available. We introduce Multilingual StyleDistance (mStyleDistance), a multilingual style embedding model trained using synthetic data and contrastive learning. We train the model on data from nine languages and create a multilingual STEL-or-Content benchmark (Wegmann et al., 2022) that serves to assess the embeddings' quality. We also employ our embeddings in an authorship verification task across different languages. Our results show that mStyleDistance embeddings outperform existing models on these multilingual style benchmarks and generalize well to unseen features and languages. We make our model publicly available at https://huggingface.co/StyleDistance/mstyledistance.
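The contrastive objective behind models like this pulls an anchor embedding toward a same-style positive and pushes it away from different-style negatives. A minimal illustrative sketch of an InfoNCE-style loss (not the paper's exact training objective, whose details are not given here):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: low when the anchor is more similar to the
    same-style positive than to the different-style negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # -log P(positive)
```

Minimizing this over many (anchor, positive, negatives) triples drives embeddings of same-style texts together regardless of language or content.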