Swan and ArabicMTEB: Dialect-Aware, Arabic-Centric, Cross-Lingual, and Cross-Cultural Embedding Models and Benchmarks

📅 2024-11-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of text embedding for Arabic across multiple dialects, domains, cultures, and languages. To this end, we propose the Swan model family (Swan-Small and Swan-Large) and ArabicMTEB—the first dedicated evaluation benchmark for Arabic text embeddings. Swan builds upon ARBERTv2 and ArMistral architectures, incorporating Arabic-centric, dialect-aware, and culturally sensitive design principles. ArabicMTEB comprises 94 datasets spanning eight task categories, enabling the first systematic assessment of multivarietal Arabic representation learning. Experimental results demonstrate that Swan-Large outperforms mE5-large on most tasks, while Swan-Small consistently surpasses mE5-base—both with significantly reduced inference overhead. This work bridges critical gaps in both Arabic-specific embedding modeling and rigorous, standardized evaluation.

📝 Abstract
We introduce Swan, a family of embedding models centred around the Arabic language, addressing both small-scale and large-scale use cases. Swan includes two variants: Swan-Small, based on ARBERTv2, and Swan-Large, built on ArMistral, a pretrained Arabic large language model. To evaluate these models, we propose ArabicMTEB, a comprehensive benchmark suite that assesses cross-lingual, multi-dialectal, multi-domain, and multi-cultural Arabic text embedding performance, covering eight diverse tasks and spanning 94 datasets. Swan-Large achieves state-of-the-art results, outperforming Multilingual-E5-large in most Arabic tasks, while Swan-Small consistently surpasses Multilingual-E5-base. Our extensive evaluations demonstrate that Swan models are both dialectally and culturally aware, excelling across various Arabic domains while offering significant monetary efficiency. This work significantly advances the field of Arabic language modelling and provides valuable resources for future research and applications in Arabic natural language processing. Our models and benchmark are available at our GitHub page: https://github.com/UBC-NLP/swan
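Retrieval-style tasks in an embedding benchmark such as ArabicMTEB reduce to encoding queries and documents as vectors and ranking documents by cosine similarity. A minimal sketch of that scoring step is shown below; the random vectors are stand-ins for embeddings a Swan model would produce (the paper does not specify model IDs or loading code, and the 768 dimension here is illustrative only):

```python
import numpy as np

def cosine_sim_matrix(queries: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between query and document embeddings."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return q @ d.T  # shape: (num_queries, num_docs)

def rank_documents(queries: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Return, for each query, document indices sorted by descending similarity."""
    sims = cosine_sim_matrix(queries, docs)
    return np.argsort(-sims, axis=1)

rng = np.random.default_rng(0)
# Hypothetical stand-ins for embeddings; a real pipeline would call the encoder.
query_emb = rng.standard_normal((2, 768))
doc_emb = rng.standard_normal((5, 768))
ranking = rank_documents(query_emb, doc_emb)
print(ranking.shape)  # one ranked list of 5 document indices per query
```

Metrics such as nDCG or MRR are then computed over these rankings against gold relevance labels; the benchmark's task categories vary what is encoded (sentence pairs, documents, labels) but the similarity step is the same.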
Problem

Research questions and friction points this paper is trying to address.

Arabic lacks dedicated, Arabic-centric text embedding models.
Cross-lingual and multi-dialectal Arabic embedding performance has had no standardized evaluation benchmark.
Dialect-aware and culturally aware Arabic language modelling remains underexplored.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Arabic-centric embedding models
Comprehensive ArabicMTEB benchmark suite
Dialect-aware and culturally aware models