OpenSeal: Good, Fast, and Cheap Construction of an Open-Source Southeast Asian LLM via Parallel Data

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two limitations of current large language models: they are predominantly English-centric, performing poorly on low-resource Southeast Asian languages, and they rarely release fully open training data. The authors propose continued pretraining on parallel corpora alone, showing that monolingual data is not required for efficient cross-lingual transfer to new languages. Using 34.7 billion tokens of parallel text and 180 hours on 8× NVIDIA H200 GPUs, they train OpenSeal, a large language model tailored to Southeast Asian languages. OpenSeal is the first such model with fully open training data and matches the performance of existing models of similar size, advancing both transparency and multilingual coverage for underrepresented languages.
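To make the parallel-only recipe concrete, here is a minimal sketch of what continued pretraining on translation pairs can look like. Everything in it is an assumption for illustration, not the authors' released code: the base checkpoint (Qwen/Qwen2.5-0.5B as a stand-in), the pair-serialization template, and the hyperparameters are not taken from the paper.

```python
# Minimal sketch: continued pretraining of a causal LM on parallel data only.
# Assumptions (not from the paper): base checkpoint, serialization template,
# and all hyperparameters are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Toy parallel corpus: (English, Southeast Asian language) sentence pairs.
pairs = [
    ("The weather is nice today.", "Cuaca hari ini cerah."),   # Indonesian
    ("Where is the train station?", "Nhà ga xe lửa ở đâu?"),   # Vietnamese
]

def to_training_text(src: str, tgt: str) -> str:
    # One plausible way to serialize a pair into a single causal-LM
    # sequence; the template actually used in the paper may differ.
    return f"English: {src}\nTranslation: {tgt}"

model.train()
for src, tgt in pairs:
    batch = tokenizer(to_training_text(src, tgt), return_tensors="pt")
    # Standard next-token objective: labels are the input ids themselves,
    # so the model learns to continue a source sentence with its translation.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice the loop would stream batches from a large parallel corpus with gradient accumulation and mixed precision; the sketch only shows the shape of the objective.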

📝 Abstract
Large language models (LLMs) have proven to be effective tools for a wide range of natural language processing (NLP) applications. Although many LLMs are multilingual, most remain English-centric and perform poorly on low-resource languages. Recently, several Southeast Asia-focused LLMs have been developed, but none are truly open source, as they do not publicly disclose their training data. Truly open-source models are important for transparency and for enabling a deeper and more precise understanding of LLM internals and development, including biases, generalization, and multilinguality. Motivated by recent advances demonstrating the effectiveness of parallel data in improving multilingual performance, we conduct controlled and comprehensive experiments on the use of parallel data in continual pretraining of LLMs. Our findings show that using only parallel data is the most effective way to extend an LLM to new languages. Using just 34.7B tokens of parallel data and 180 hours on 8× NVIDIA H200 GPUs, we built OpenSeal, the first truly open Southeast Asian LLM that rivals the performance of existing models of similar size.
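For a sense of the compute budget, the numbers quoted in the abstract imply the following throughput (back-of-envelope arithmetic, not figures reported by the authors):

```python
# Back-of-envelope arithmetic implied by the stated training budget.
tokens = 34.7e9   # parallel-data tokens
gpus = 8          # NVIDIA H200 GPUs
hours = 180.0     # wall-clock training time

total_tps = tokens / (hours * 3600)   # ~53,500 tokens/s across all GPUs
per_gpu_tps = total_tps / gpus        # ~6,700 tokens/s per GPU
gpu_hours = gpus * hours              # 1,440 H200 GPU-hours in total

print(f"{total_tps:,.0f} tok/s total; {per_gpu_tps:,.0f} tok/s per GPU; {gpu_hours:,.0f} GPU-hours")
```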
Problem

Research questions and friction points this paper is trying to address.

large language models
low-resource languages
Southeast Asian languages
open-source
multilinguality
Innovation

Methods, ideas, or system contributions that make the work stand out.

parallel data
open-source LLM
Southeast Asian languages
continual pretraining
multilingual modeling
Tan Sang Nguyen
Department of Computer Science, National University of Singapore
Muhammad Reza Qorib
Department of Computer Science, National University of Singapore
Hwee Tou Ng
Provost's Chair Professor of Computer Science, National University of Singapore
Natural Language Processing
Computational Linguistics