K-EXAONE Technical Report

📅 2026-01-05
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work introduces K-EXAONE, a 236B-parameter multilingual large language model built on a sparsely activated Mixture-of-Experts architecture that activates only 23B parameters per token during inference. Designed to meet growing demand in industrial and scientific applications for strong reasoning, extended context handling, and multilingual support, K-EXAONE accommodates a context length of up to 256K tokens and supports six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. The model is built through large-scale distributed training, multilingual pretraining, and alignment techniques that enable efficient scaling. Evaluations across reasoning, agentic, general, Korean, and multilingual benchmarks show that K-EXAONE matches the performance of open-weight models of similar size, underscoring its potential as a high-performance foundation model.

📝 Abstract
This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.
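
The abstract's key efficiency claim is that only 23B of the model's 236B parameters are active during inference, which is the defining property of a sparsely activated Mixture-of-Experts. The report listing does not specify K-EXAONE's expert count, routing rule, or layer sizes, so the following is a minimal generic sketch of top-k MoE routing in PyTorch; every size here (d_model, d_ff, num_experts, top_k) is an illustrative assumption, not K-EXAONE's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Generic token-level top-k routing over a pool of expert FFNs.

    Illustrative sizes only; K-EXAONE's actual expert count, hidden
    sizes, and routing rule are not given in this report listing.
    """

    def __init__(self, d_model: int = 1024, d_ff: int = 4096,
                 num_experts: int = 64, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)
        # A pool of independent feed-forward experts.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = self.router(x)                         # (tokens, experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize over the k picks
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue  # expert e received no tokens this step
            # Only routed tokens pass through expert e: this is where the
            # "236B total / 23B active" style of saving comes from.
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out


if __name__ == "__main__":
    layer = SparseMoELayer()
    tokens = torch.randn(8, 1024)
    print(layer(tokens).shape)  # torch.Size([8, 1024])
```

In a layer like this, per-token compute scales with top_k rather than num_experts, so total capacity can grow far beyond what any single forward pass touches; that is how a 236B-parameter model can run at roughly the inference cost of its ~23B active slice.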
Problem

Research questions and friction points this paper addresses:
multilingual language model, large-scale AI, Mixture-of-Experts, long context window, foundation model

Innovation

Methods, ideas, or system contributions that make the work stand out:
Mixture-of-Experts, multilingual language model, long-context window, large-scale AI model, parameter-efficient inference
👥 Authors

Eunbi Choi - LG AI Research
Kibong Choi - LG AI Research
Seokhee Hong - LG AI Research (Natural Language Processing)
Junwon Hwang - LG AI Research
Hyojin Jeon - LG AI Research
Hyunjik Jo - LG AI Research
Joonkee Kim - LG AI Research (Language Modeling, Reinforcement Learning)
Seonghwan Kim - Dept. of Chemistry, KAIST (machine learning, chemical reaction, representation learning)
Soyeon Kim - Korea Advanced Institute of Science and Technology (Responsible AI, ML Fairness, Differential Privacy, LLM Hallucination)
SunKyoung Kim - University of Tsukuba (Human-Robot Interaction, Human-Computer Interaction, Learning)
Yireun Kim - LG AI Research (Deep Learning, LLM, NLP, Database)
Yongil Kim - Seoul National University (Dialog System, Multi-modal Learning)
Haeju Lee - KAIST, LG AI Research
Jinsik Lee - LG AI Research (Natural Language Processing)
Kyungmin Lee - LG AI Research
Sangha Park - Seoul National University (machine learning, deep learning, AI safety and reliability)
Heuiyeen Yeen - LG AI Research
Hwan Chang - LG AI Research
Stanley Jungkyu Choi - LG AI Research (AI, Natural Language Processing, Speech Recognition, Vision)
Yejin Choi - Stanford University / NVIDIA (Natural Language Processing, Deep Learning, Artificial Intelligence, Commonsense Reasoning)
Jiwon Ham - LG AI Research
Kijeong Jeon - LG AI Research
Geunyeong Jeong - LG AI Research
Gerrard Jeongwon Jo - LG AI Research
Yonghwan Jo - LG AI Research
Jiyeon Jung - LG AI Research
Naeun Kang - LG AI Research
Dohoon Kim - LG AI Research
Euisoon Kim - LG AI Research
Hayeon Kim - LG AI Research
Hyosang Kim - LG AI Research
Hyunseo Kim - LG AI Research
Jieun Kim - Associate Professor, Hanyang University (UI/UX design, Inclusive Design, Human-computer Interaction)
Minu Kim - KAIST (speech recognition, speaker verification, phonology, linguistics)
Myo-Deok Kim - LG AI Research
Unsol Kim - LG AI Research
Youchul Kim - LG AI Research
Youngjin Kim - LG AI Research
Chaeeun Lee - LG AI Research
Chaeyoon Lee - LG AI Research
Changhun Lee - LG AI Research
Dahm Lee - LG AI Research
Edward Hwayoung Lee - LG AI Research
Honglak Lee - LG AI Research / U. Michigan (Machine Learning, Deep Learning, Reinforcement Learning, Computer Vision, Artificial Intelligence)
Jinsang Lee - LG AI Research
Jiyoung Lee - Assistant Professor, Ewha Womans University (Multimodal Learning, Computer Vision, Machine Learning)
Sangeun Lee - LG AI Research
Seungwon Lim - Yonsei University (NLP, Multimodal Learning, Agent)
Solji Lim - LG AI Research
Woohyung Lim - LG AI Research (Deep Learning, Representation Learning, Anomaly Detection, Time-series Forecasting)
Chanwoo Moon - LG AI Research
Jaewoo Park - LG AI Research
Jinho Park - LG AI Research
Yongmin Park - LG AI Research
Hyerin Seo - LG AI Research
Wooseok Seo - LG AI Research
Yongwoo Song - LG AI Research
Sejong Yang - LG AI Research
Sihoon Yang - LG AI Research
Chang En Yea - LG AI Research
Sihyuk Yi - LG AI Research
Chansik Yoon - LG AI Research
Dongkeun Yoon - LG AI Research
Sangyeon Yoon - Yonsei University
Hyeongu Yun - LG AI Research (Large Language Models)