🤖 AI Summary
To address the limitations of Arabic AI systems in content generation, dialect identification, information verification, and domain-specific adaptation, particularly in religious and news contexts, this paper introduces a full-stack, Arabic-centric multimodal generative AI platform. Methodologically, it deploys two complementary Arabic LLMs (Fanar Star, a 7B model trained from scratch, and Fanar Prime, a 9B model continually trained from Gemma-2 9B), with prompts transparently routed between them by a custom-built orchestrator. The platform integrates an Islamic-knowledge Retrieval Augmented Generation (RAG) system for religious prompts and a Recency RAG for temporal grounding of events after the pre-training cut-off, and adds dialect-aware bilingual speech recognition, regionally adapted voice and image generation, and an attribution service for verifying fact-based generated content. The models report best-in-class results on well-established Arabic benchmarks for similarly sized models.
📝 Abstract
We present Fanar, a platform for Arabic-centric multimodal generative AI that supports language, speech, and image generation tasks. At the heart of Fanar are Fanar Star and Fanar Prime, two highly capable Arabic Large Language Models (LLMs) that are best in class on well-established benchmarks for similarly sized models. Fanar Star is a 7-billion (7B) parameter model trained from scratch on nearly 1 trillion clean, deduplicated Arabic, English, and code tokens. Fanar Prime is a 9B parameter model continually trained from the Gemma-2 9B base model on the same 1 trillion token set. Both models are deployed concurrently and are designed to handle different types of prompts, transparently routed through a custom-built orchestrator. The Fanar platform provides many other capabilities, including a customized Islamic Retrieval Augmented Generation (RAG) system for handling religious prompts and a Recency RAG for summarizing information about current or recent events that occurred after the pre-training data cut-off date. The platform provides additional cognitive capabilities, including in-house bilingual speech recognition that supports multiple Arabic dialects, and voice and image generation fine-tuned to better reflect regional characteristics. Finally, Fanar provides an attribution service that can be used to verify the authenticity of fact-based generated content. The design, development, and implementation of Fanar were undertaken entirely at Hamad Bin Khalifa University's Qatar Computing Research Institute (QCRI) and were sponsored by Qatar's Ministry of Communications and Information Technology to enable sovereign AI technology development.
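The orchestration described above, routing a prompt to one of the two models or to a specialized RAG pipeline, can be sketched as follows. This is a minimal illustrative sketch, not the platform's actual implementation: the class names, the keyword heuristics, and the length-based model choice are all assumptions standing in for whatever learned routing policy the real orchestrator uses.

```python
# Hypothetical sketch of Fanar-style prompt routing. All names and
# heuristics here are illustrative assumptions, not the real API.
from dataclasses import dataclass


@dataclass
class Route:
    backend: str    # which model or pipeline handles the prompt
    use_rag: bool   # whether retrieved context is prepended


def route_prompt(prompt: str) -> Route:
    """Toy routing policy: keyword checks stand in for the learned
    classifier a production orchestrator would actually use."""
    text = prompt.lower()
    # Religious prompts go to the Islamic-knowledge RAG pipeline.
    if any(k in text for k in ("quran", "hadith", "fatwa")):
        return Route("islamic-rag", use_rag=True)
    # Prompts about recent events go to the Recency RAG pipeline.
    if any(k in text for k in ("today", "latest", "news")):
        return Route("recency-rag", use_rag=True)
    # Longer prompts go to the larger model (assumed policy).
    if len(prompt) > 500:
        return Route("fanar-prime-9b", use_rag=False)
    # Everything else defaults to the smaller from-scratch model.
    return Route("fanar-star-7b", use_rag=False)


if __name__ == "__main__":
    print(route_prompt("What does the Quran say about charity?").backend)
    print(route_prompt("Summarize the latest news from Doha").backend)
```

The routing criteria here are placeholders; the paper only states that the two models address different types of prompts and that routing is transparent to the user.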