Kanana: Compute-efficient Bilingual Language Models

📅 2025-02-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high computational cost and weak Korean capability of bilingual (Korean–English) large language models, this work proposes a holistic co-optimization paradigm. Methodologically, it introduces a dynamic bilingual data-filtering mechanism for high-quality corpus curation, integrated with staged pretraining, depth up-scaling, structured pruning, and knowledge distillation; post-training combines supervised fine-tuning and preference alignment; and the application layer supports retrieval-augmented generation (RAG), embedding generation, and function calling. Leveraging this framework, the authors develop a family of efficient bilingual models spanning 2.1B to 32.5B parameters, with the 2.1B base, instruction-tuned, and embedding models publicly released. Experiments show that the models outperform similarly sized baselines (e.g., Qwen2, Llama 3) on Korean-language tasks while maintaining competitive English performance, and achieve equivalent accuracy with 37% fewer FLOPs and 42% lower inference latency.

๐Ÿ“ Abstract
We introduce Kanana, a series of bilingual language models that demonstrate exceptional performance in Korean and competitive performance in English. The computational cost of Kanana is significantly lower than that of state-of-the-art models of similar size. The report details the techniques employed during pre-training to achieve compute-efficient yet competitive models, including high-quality data filtering, staged pre-training, depth up-scaling, and pruning and distillation. Furthermore, the report outlines the methodologies utilized during the post-training of the Kanana models, encompassing supervised fine-tuning and preference optimization, aimed at enhancing their capability for seamless interaction with users. Lastly, the report elaborates on plausible approaches used for language model adaptation to specific scenarios, such as embedding, retrieval-augmented generation, and function calling. The Kanana model series spans from 2.1B to 32.5B parameters, with the 2.1B models (base, instruct, embedding) publicly released to promote research on Korean language models.
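The abstract lists depth up-scaling among the pre-training techniques. As a hedged illustration only (the report's exact recipe is not reproduced here), the general idea, popularized by SOLAR-style DUS, is to duplicate a trained layer stack and drop an overlapping band so the two copies join smoothly. The `depth_up_scale` helper and the string layer names below are hypothetical stand-ins for transformer blocks:

```python
def depth_up_scale(layers, overlap):
    """Sketch of depth up-scaling: duplicate the layer stack, drop the
    top `overlap` layers of the lower copy and the bottom `overlap`
    layers of the upper copy, then concatenate the two copies."""
    assert 0 < overlap < len(layers)
    lower = layers[: len(layers) - overlap]  # keep bottom n - overlap layers
    upper = layers[overlap:]                 # keep top n - overlap layers
    return lower + upper                     # 2 * (n - overlap) layers total

# Example: a 32-layer stack up-scaled with an 8-layer overlap yields 48 layers.
base = [f"layer_{i}" for i in range(32)]
scaled = depth_up_scale(base, overlap=8)
print(len(scaled))  # 48
```

The up-scaled model is then continually pre-trained so the seam between the two copies heals; the overlap size is a tunable trade-off between final depth and initialization quality.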
Problem

Research questions and friction points this paper is trying to address.

Develop compute-efficient bilingual language models
Enhance Korean and English language performance
Implement cost-effective pre- and post-training techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-quality data filtering
Staged pre-training techniques
Supervised fine-tuning methods
Authors
Yunju Bak, Hojin Lee, Minho Ryu, Jiyeon Ham, Seungjae Jung, Daniel Wontae Nam (Kakao Brain Corp.), Taegyeong Eo, Donghun Lee, Doohae Jung, Boseop Kim, Nayeon Kim, Jaesun Park, Hyunho Kim, Hyunwoong Ko (Arizona State University), Changmin Lee, Kyoung-Woon On (Seoul National University), Seulye Baeg, Junrae Cho, Sunghee Jung, Jieun Kang, EungGyun Kim, Eunhwa Kim, Byeongil Ko, Daniel Lee, Minchul Lee, Miok Lee, Shinbok Lee, Gaeun Seo