KoBALT: Korean Benchmark For Advanced Linguistic Tasks

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Korean benchmarks lack linguistic depth and typological grounding, limiting accurate assessment of large language models’ (LLMs) true comprehension in morphologically rich languages. To address this, we introduce KoBALT—the first theory-driven, deep linguistic evaluation benchmark for Korean—covering 24 linguistic phenomena across five domains: syntax, semantics, pragmatics, phonology, and morphology. It comprises 700 expert-crafted, low-n-gram-overlap multiple-choice items. Methodologically, KoBALT integrates formal linguistic theory modeling, cross-domain collaborative annotation, human preference evaluation (N=95), and a standardized zero-shot evaluation protocol. Evaluated on 20 state-of-the-art Korean LLMs, KoBALT reveals substantial performance disparities across domains (e.g., 66% accuracy in semantics vs. 31% in phonology), with a maximum overall accuracy of 61%. Crucially, human judgment scores correlate strongly with KoBALT scores (p<0.01), supporting the benchmark's psychometric soundness and its diagnostic utility for probing fine-grained linguistic competence.
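As a rough illustration of the scoring that a zero-shot multiple-choice protocol implies, overall and per-domain accuracy can be computed from model predictions as sketched below. The item schema, field names, and helper function are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def per_domain_accuracy(items, predictions):
    """Overall and per-domain accuracy for multiple-choice items.

    items:       list of dicts with hypothetical keys 'id', 'domain', 'answer'
    predictions: dict mapping item id -> predicted choice label (e.g. 'A')
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        total[item["domain"]] += 1
        if predictions.get(item["id"]) == item["answer"]:
            correct[item["domain"]] += 1
    per_domain = {d: correct[d] / total[d] for d in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_domain

# Toy example with made-up items from two of KoBALT's five domains
items = [
    {"id": 1, "domain": "semantics", "answer": "A"},
    {"id": 2, "domain": "phonology", "answer": "C"},
    {"id": 3, "domain": "semantics", "answer": "B"},
]
predictions = {1: "A", 2: "B", 3: "B"}
overall, per_domain = per_domain_accuracy(items, predictions)
print(overall)     # 2 of 3 items correct
print(per_domain)  # accuracy broken out by domain
```

Reporting accuracy per domain, as in this sketch, is what surfaces the semantics-vs-phonology gap the summary describes.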

📝 Abstract
We introduce KoBALT (Korean Benchmark for Advanced Linguistic Tasks), a comprehensive linguistically-motivated benchmark comprising 700 multiple-choice questions spanning 24 phenomena across five linguistic domains: syntax, semantics, pragmatics, phonetics/phonology, and morphology. KoBALT is designed to advance the evaluation of large language models (LLMs) in Korean, a morphologically rich language, by addressing the limitations of conventional benchmarks that often lack linguistic depth and typological grounding. It introduces a suite of expert-curated, linguistically motivated questions with minimal n-gram overlap with standard Korean corpora, substantially mitigating the risk of data contamination and allowing a more robust assessment of true language understanding. Our evaluation of 20 contemporary LLMs reveals significant performance disparities, with the highest-performing model achieving 61% general accuracy but showing substantial variation across linguistic domains - from stronger performance in semantics (66%) to considerable weaknesses in phonology (31%) and morphology (36%). Through human preference evaluation with 95 annotators, we demonstrate a strong correlation between KoBALT scores and human judgments, validating our benchmark's effectiveness as a discriminative measure of Korean language understanding. KoBALT addresses critical gaps in linguistic evaluation for typologically diverse languages and provides a robust framework for assessing genuine linguistic competence in Korean language models.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' Korean linguistic depth across five domains
Mitigates data contamination with expert-curated, unique questions
Reveals performance gaps in Korean language understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert-curated linguistic questions for Korean evaluation
Minimized n-gram overlap to prevent data contamination
Strong correlation between benchmark scores and human judgments
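The contamination check the bullets describe can be pictured with a toy overlap metric: the fraction of an item's character n-grams that also occur in a reference corpus. The function and the exact metric below are assumptions for illustration, not the paper's procedure.

```python
def ngram_overlap(text, corpus_text, n=3):
    """Fraction of text's character n-grams that also occur in corpus_text.

    A low value suggests the item is unlikely to appear near-verbatim in
    pretraining data. Character n-grams (rather than whitespace tokens)
    suit Korean's agglutinative morphology. Illustrative metric only.
    """
    grams = {text[i:i + n] for i in range(len(text) - n + 1)}
    if not grams:
        return 0.0
    corpus_grams = {corpus_text[i:i + n] for i in range(len(corpus_text) - n + 1)}
    return len(grams & corpus_grams) / len(grams)

# Only the trigram 'abc' is shared: 1 of 4 trigrams overlap
print(ngram_overlap("abcdef", "abcxyz"))
```

Items whose overlap with standard Korean corpora exceeds a chosen threshold would be rewritten or discarded under such a screen.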
Hyopil Shin
Seoul National University
Sangah Lee
Seoul National University
Dongjun Jang
Seoul National University
Natural Language Processing
Wooseok Song
Seoul National University
Jaeyoon Kim
KAIST
Computer Vision, Image Retrieval
Chaeyoung Oh
Seoul National University
Hyemi Jo
Seoul National University
Youngchae Ahn
Seoul National University
NLP
Sihyun Oh
Seoul National University
Hyohyeong Chang
Seoul National University
Sunkyoung Kim
LG AI Research
Large Language Model, Cross-lingual Transfer, Domain Adaptation, Question Answering
Jinsik Lee
LG AI Research
Natural Language Processing