AraToken: Optimizing Arabic Tokenization with Normalization Pipeline and Language Extension for Qwen3

📅 2025-12-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high redundancy and low compression ratio of generic tokenizers on morphologically rich Arabic, this paper proposes an Arabic-optimized SentencePiece Unigram tokenizer and a Language Extension Pipeline (LEP). We design a dedicated Arabic normalization pipeline—unifying Alif variants, removing diacritics, and normalizing Arabic-Indic numerals—and introduce LEP: initializing new tokens via mean subword embeddings and enabling efficient vocabulary expansion through selective Transformer layer unfreezing. Integrated into Qwen3-0.6B, our tokenizer reduces tokenization fertility by 18% (from 1.35 to 1.199 tokens/word) and lowers validation loss from 8.28 to 2.43 after 800 training steps on 100K samples. All code, the optimized tokenizer, and model checkpoints are publicly released.
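The three normalization steps named above (unifying Alif variants, stripping diacritics, mapping Arabic-Indic numerals) can be sketched as a small text-preprocessing function. This is a minimal illustrative sketch, not the paper's released implementation; the Unicode ranges and the function name `normalize_arabic` are assumptions based on standard Arabic NLP practice.

```python
import re

# Assumed minimal sketch of the described normalization steps.
ALIF_VARIANTS = re.compile(r"[\u0622\u0623\u0625]")   # آ / أ / إ -> bare Alif ا
DIACRITICS = re.compile(r"[\u064B-\u0652]")           # fathatan .. sukun (harakat)
# Arabic-Indic digits ٠..٩ -> ASCII 0..9
ARABIC_INDIC = {ord(c): str(i) for i, c in enumerate("٠١٢٣٤٥٦٧٨٩")}

def normalize_arabic(text: str) -> str:
    text = ALIF_VARIANTS.sub("\u0627", text)  # unify Alif variants
    text = DIACRITICS.sub("", text)           # remove diacritics
    text = text.translate(ARABIC_INDIC)       # normalize Arabic-Indic numerals
    return text
```

Running such a pass before tokenizer training collapses orthographic variants of the same word onto one surface form, which is what drives the fertility reduction the paper reports.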

📝 Abstract
Tokenization is a critical preprocessing step for large language models (LLMs), directly impacting training efficiency and downstream performance. General-purpose tokenizers trained predominantly on English and Latin-script languages exhibit suboptimal performance on morphologically rich languages such as Arabic, resulting in inflated token sequences and reduced compression efficiency. In this work, we present AraToken, an Arabic-optimized tokenizer built on the SentencePiece Unigram algorithm with a comprehensive normalization pipeline addressing Arabic-specific orthographic variations, including Alif variants, diacritics, and Arabic-Indic numerals. We systematically compare BPE, WordPiece, and SentencePiece algorithms across multiple configurations, demonstrating that SentencePiece with normalization achieves 18% lower fertility (1.199 vs 1.35 tokens/word) compared to unnormalized baselines. Furthermore, we introduce the Language Extension Pipeline (LEP), a method for integrating the optimized tokenizer into Qwen3-0.6B through vocabulary extension with mean subtoken initialization and selective transformer layer unfreezing. Our experiments show that LEP reduces evaluation loss from 8.28 to 2.43 within 800 training steps on 100K Arabic samples. We release our tokenizer, training scripts, and model checkpoints to facilitate Arabic NLP research.
Problem

Research questions and friction points this paper is trying to address.

Optimizes Arabic tokenization for LLMs with normalization
Addresses orthographic variations like Alif and diacritics
Integrates tokenizer into Qwen3 via vocabulary extension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Arabic-specific normalization pipeline for orthographic variations
SentencePiece Unigram algorithm with 18% lower fertility
Language Extension Pipeline integrating tokenizer into Qwen3 model
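LEP's "mean subtoken initialization" can be illustrated with a short sketch: each token that is new to the extended vocabulary starts from the mean of the embeddings of the subtokens the original tokenizer split it into. This is a hypothetical NumPy sketch of the general technique, not the authors' code; the function name `init_new_rows` and the mapping format are assumptions.

```python
import numpy as np

def init_new_rows(weight: np.ndarray,
                  new_token_subids: dict[int, list[int]]) -> np.ndarray:
    """Initialize rows of an extended embedding matrix.

    weight: (extended_vocab_size, dim) embedding matrix, where rows for
        new tokens are still uninitialized.
    new_token_subids: maps each new token id to the ids of the subtokens
        the base tokenizer previously split that token into.
    """
    w = weight.copy()
    for new_id, old_ids in new_token_subids.items():
        # Mean of the old subtoken embeddings gives the new token a
        # semantically reasonable starting point instead of random noise.
        w[new_id] = w[old_ids].mean(axis=0)
    return w
```

Starting new rows near their subtoken means keeps the initial logits for new tokens in a sensible range, which is consistent with the rapid loss drop (8.28 to 2.43 in 800 steps) reported for LEP.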
Mark Kashirskiy
Higher School of Economics, Moscow, Russia
Artiom Lipinski
Markov Lab, Saint Petersburg State University, Russia
Ilya Makarov
Principal AI Researcher
Artificial Intelligence · Computer Vision · Network Science · Game Design · Augmented Reality