SEA-LION: Southeast Asian Languages in One Network

📅 2025-04-08
📈 Citations: 10
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the longstanding marginalization of low-resource Southeast Asian (SEA) languages in large language model (LLM) research, this work introduces the first open-source multilingual LLM family covering all of the region's major languages. Methodologically, it builds on two foundation models, Llama-3 (8B) and Gemma (9B), and applies a multi-stage pipeline of continued pre-training, hierarchical instruction fine-tuning, preference-based alignment, and model merging, yielding unified support for 11 languages including English, Chinese, Indonesian, Vietnamese, and Thai. Contributions include: (1) the first open-source multilingual LLM family with comprehensive SEA language coverage; (2) a progressive post-training framework designed for low-resource language adaptation; and (3) state-of-the-art performance on multilingual SEA benchmarks, demonstrating substantial gains in both linguistic understanding and generative capability for local languages.
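
Read as pseudocode, the recipe is a linear pipeline whose candidate checkpoints are merged at the end. The sketch below is purely illustrative: every function (continued_pretrain, instruction_finetune, align, merge) is a hypothetical stub standing in for a stage named in the summary, not code from the paper.

    # Hypothetical sketch of the multi-stage recipe summarized above.
    # All functions are illustrative stubs, not the authors' actual code.

    def continued_pretrain(model, corpus):
        return model  # stage 1: next-token training on multilingual SEA text

    def instruction_finetune(model, dataset):
        return model  # stage 2: one round of supervised instruction tuning

    def align(model, preference_data):
        return model  # stage 3: preference-based alignment

    def merge(models):
        return models[0]  # stage 4: model merging (see sketch further below)

    def build_candidate(base_model, sea_corpus, sft_rounds, preference_data):
        model = continued_pretrain(base_model, sea_corpus)
        for round_data in sft_rounds:  # hierarchical, multi-round fine-tuning
            model = instruction_finetune(model, round_data)
        return align(model, preference_data)

    # Candidates trained under different configurations are merged at the end.
    final_model = merge([
        build_candidate("llama-3-8b", "sea-corpus", ["sft-a", "sft-b"], "prefs"),
        build_candidate("llama-3-8b", "sea-corpus", ["sft-c"], "prefs"),
    ])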

📝 Abstract
Recently, Large Language Models (LLMs) have dominated much of the artificial intelligence scene with their ability to process and generate natural languages. However, the majority of LLM research and development remains English-centric, leaving low-resource languages such as those in the Southeast Asian (SEA) region under-represented. To address this representation gap, we introduce Llama-SEA-LION-v3-8B-IT and Gemma-SEA-LION-v3-9B-IT, two cutting-edge multilingual LLMs designed for SEA languages. The SEA-LION family of LLMs supports 11 SEA languages, namely English, Chinese, Indonesian, Vietnamese, Malay, Thai, Burmese, Lao, Filipino, Tamil, and Khmer. Our work leverages large-scale multilingual continued pre-training with a comprehensive post-training regime involving multiple stages of instruction fine-tuning, alignment, and model merging. Evaluation results on multilingual benchmarks indicate that our models achieve state-of-the-art performance across LLMs supporting SEA languages. We open-source the models to benefit the wider SEA community.
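
Since the models are open-sourced, a released checkpoint can presumably be queried with standard Hugging Face tooling. A minimal inference sketch follows; the repository id is inferred from the model name in the abstract and is an assumption, so verify it against the actual release.

    # Minimal inference sketch using the Hugging Face transformers library.
    # The repo id below is inferred from the model name in the abstract
    # (an assumption); check the official release for the exact identifier.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "aisingapore/Llama-SEA-LION-v3-8B-IT"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    # Prompt in Indonesian, one of the 11 supported languages.
    messages = [{"role": "user", "content": "Apa ibu kota Indonesia?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
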
Problem

Research questions and friction points this paper is trying to address.

Address under-representation of Southeast Asian languages in LLMs
Develop multilingual LLMs supporting 11 SEA languages
Achieve state-of-the-art performance in SEA language benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual LLMs purpose-built for SEA languages
Large-scale continued pre-training followed by multi-stage post-training: instruction fine-tuning, alignment, and model merging (see the sketch below)
State-of-the-art performance on multilingual SEA benchmarks
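
This page names model merging as the final post-training stage but does not say which method the authors used. Uniform weight averaging of same-architecture checkpoints, often called model souping, is one common technique, sketched here with PyTorch purely as illustration.

    # Illustration of one common merging method: uniform averaging of the
    # weights of same-architecture checkpoints. The paper's actual merging
    # technique is not specified on this page.
    import torch

    def average_state_dicts(state_dicts):
        # Uniformly average tensors key-by-key; all dicts must share keys
        # and shapes (i.e. come from the same architecture).
        merged = {}
        for key in state_dicts[0]:
            merged[key] = torch.stack(
                [sd[key].float() for sd in state_dicts]
            ).mean(dim=0)
        return merged

    # Usage (paths are hypothetical):
    # checkpoints = [torch.load(p) for p in ["candidate_a.pt", "candidate_b.pt"]]
    # torch.save(average_state_dicts(checkpoints), "merged.pt")
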
🔎 Similar Papers
No similar papers found.

Authors

Raymond Ng (University of British Columbia)
Research interests: data mining, health informatics, genomics, NLP, text mining
Thanh Ngan Nguyen (AI Singapore, National University of Singapore)
Yuli Huang (AI Singapore, National University of Singapore)
Ngee Chia Tai (AI Singapore, National University of Singapore)
Wai Yi Leong (AI Singapore, National University of Singapore)
Wei Qi Leong (AI Singapore, National University of Singapore)
Xianbin Yong (AI Singapore, National University of Singapore)
Jian Gang Ngui (AI Singapore, National University of Singapore)
Yosephine Susanto (AI Singapore, National University of Singapore)
Nicholas Cheng (AI Singapore, National University of Singapore)
Hamsawardhini Rengarajan (AI Singapore, National University of Singapore)
Peerat Limkonchotiwat (Research Fellow, AI Singapore, National University of Singapore)
Research interests: Evaluation and Benchmark, Representation Learning, Large Language Model, Multilingual Learning
Adithya Venkatadri Hulagadri (AI Singapore, National University of Singapore)
Kok Wai Teng (AI Singapore, National University of Singapore)
Yeo Yeow Tong (AI Singapore, National University of Singapore)
Bryan Siow (AI Singapore, National University of Singapore)
Wei Yi Teo (AI Singapore, National University of Singapore)
Wayne Lau (AI Singapore, National University of Singapore)
Choon Meng Tan (AI Singapore, National University of Singapore)
Brandon Ong (AI Singapore, National University of Singapore)
Zhi Hao Ong (AI Singapore, National University of Singapore)
Jann Railey Montalan (AI Singapore, National University of Singapore)
Adwin Chan (AI Singapore, National University of Singapore)
Sajeban Antonyrex (AI Singapore, National University of Singapore)
Ren Lee (AI Singapore, National University of Singapore)
Esther Choa (AI Singapore, National University of Singapore)
David Ong Tat-Wee (AI Singapore, National University of Singapore)
Bing Jie Darius Liu (AI Singapore, National University of Singapore)
William Chandra Tjhi (AI Singapore, National University of Singapore)
Erik Cambria (Professor @ NTU CCDS & Visiting @ MIT Media Lab)
Research interests: Neurosymbolic AI, Multimodal Interaction, NLP, Affective Computing, Sentiment Analysis
Leslie Teo (AI Singapore, National University of Singapore)