🤖 AI Summary
This work addresses the limitations of current large language models, which are predominantly English-centric, perform poorly on low-resource Southeast Asian languages, and lack fully open training data. The authors conduct controlled experiments on continual pretraining with parallel corpora and find that using only parallel data is the most effective way to extend an LLM to new languages, without requiring additional monolingual data. Using 34.7 billion tokens of parallel text and 180 hours on 8× NVIDIA H200 GPUs, they trained OpenSeal, a large language model tailored for Southeast Asian languages. OpenSeal is the first such model with fully open training data, and it matches the performance of existing models of similar size, setting a new standard for transparency and multilingual capability in underrepresented linguistic regions.
📝 Abstract
Large language models (LLMs) have proven to be effective tools for a wide range of natural language processing (NLP) applications. Although many LLMs are multilingual, most remain English-centric and perform poorly on low-resource languages. Recently, several Southeast Asia-focused LLMs have been developed, but none are truly open source, as they do not publicly disclose their training data. Truly open-source models are important for transparency and for enabling a deeper and more precise understanding of LLM internals and development, including biases, generalization, and multilinguality. Motivated by recent advances demonstrating that parallel data improves multilingual performance, we conduct controlled and comprehensive experiments on continual pretraining of LLMs with parallel data. Our findings show that using only parallel data is the most effective way to extend an LLM to new languages. Using just 34.7B tokens of parallel data and 180 hours on 8× NVIDIA H200 GPUs, we built OpenSeal, the first truly open Southeast Asian LLM that rivals the performance of existing models of similar size.